

Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia, and Asia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers' professional and personal knowledge and understanding.

The Wiley Finance series contains books written specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk management, financial engineering, valuation, and financial instrument analysis, as well as much more.

For a list of available titles, please visit our website at www.WileyFinance.com.


Quantitative Risk Management

A Practical Guide to Financial Risk

THOMAS S. COLEMAN

John Wiley & Sons, Inc.


Copyright © 2012 by Thomas S. Coleman. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

Chapters 1, 2, 3, 4, and parts of 6 were originally published as A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

Chapters 5, 7, 8, 9, 10, and 11 include figures, tables, and short excerpts that have been modified or reprinted from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Coleman, Thomas Sedgwick, 1955–

Quantitative risk management: a practical guide to financial risk / Thomas S. Coleman.

pages cm.—(Wiley finance series; 669)

Includes bibliographical references and index.
ISBN 978-1-118-02658-8 (cloth); ISBN 978-1-118-26077-7 (ebk); ISBN 978-1-118-22210-2 (ebk); ISBN 978-1-118-23593-5 (ebk)

1. Financial services industry—Risk management. 2. Financial risk management.

3. Capital market. I. Title.
HG173.C664 2012
332.1068'01—dc23
2011048533

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


To Lu and Jim, for making me who I am today.


Contents

Foreword
Preface
Acknowledgments

PART ONE
Managing Risk

CHAPTER 1 Risk Management versus Risk Measurement
CHAPTER 2 Risk, Uncertainty, Probability, and Luck
CHAPTER 3 Managing Risk
CHAPTER 4 Financial Risk Events
CHAPTER 5 Practical Risk Techniques
CHAPTER 6 Uses and Limitations of Quantitative Techniques

PART TWO
Measuring Risk

CHAPTER 7 Introduction to Quantitative Risk Measurement
CHAPTER 8 Risk and Summary Measures: Volatility and VaR
CHAPTER 9 Using Volatility and VaR
CHAPTER 10 Portfolio Risk Analytics and Reporting
CHAPTER 11 Credit Risk
CHAPTER 12 Liquidity and Operational Risk
CHAPTER 13 Conclusion

About the Companion Web Site
References
About the Author
Index


Foreword

Having been the head of the risk management department at Goldman Sachs for four years (which I sadly feel obligated to note was many years ago, during a period when the firm was a highly respected private partnership), and having collaborated on a book called The Practice of Risk Management, I suppose it is not a surprise that I have a point of view about the topic of this book.

Thomas Coleman also brings a point of view to the topic of risk management, and it turns out, for better or for worse, we agree. A central theme of this book is that "in reality risk management is as much the art of managing people, processes, and institutions as it is the science of measuring and quantifying risk." I think he is absolutely correct.

This book's title also highlights an important distinction that is sometimes missed in large organizations. Risk measurement, per se, which is a task usually assigned to the "risk management" department, is in reality only one input to the risk management function. As Coleman elaborates, "Risk measurement tools . . . help one to understand current and past exposures, a valuable and necessary undertaking but clearly not sufficient for actually managing risk." However, "The art of risk management," which he notes is squarely the responsibility of senior management, "is not just in responding to anticipated events, but in building a culture and organization that can respond to risk and withstand unanticipated events. In other words, risk management is about building flexible and robust processes and organizations."

The recognition that risk management is fundamentally about communicating risk up and managing risk from the top leads to the next level of insight. In most financial firms different risks are managed by desks requiring very different metrics. Nonetheless, there must be a comprehensive and transparent aggregation of risks and an ability to disaggregate and drill down. And as Coleman points out, consistency and transparency in this process are key requirements. It is absolutely essential that all risk takers and risk managers speak the same language in describing and understanding their risks.


Finally, Coleman emphasizes throughout that the management of risk is not a function designed to minimize risk. Although risk is usually a reference to the downside of random outcomes, as Coleman puts it, risk management is about taking advantage of opportunities: "controlling the downside and exploiting the upside."

In discussing the measurement of risk the key concept is, of course, the distribution of outcomes. But Coleman rightly emphasizes that this distribution is unknown, and cannot be summarized by a single number, such as a measure of dispersion. Behavioral finance has provided many illustrations of the fact that, as Coleman notes, "human intuition is not very good at working with randomness and probabilities." In order to be successful at managing risk, he suggests, "We must give up any illusion that there is certainty in this world and embrace the future as fluid, changeable, and contingent."

One of my favorite aspects of the book is its clever instruction on working with and developing intuition about probabilities. Consider, for example, a classic problem, that of interpreting medical test results. Coleman considers the case of testing for breast cancer, a disease that afflicts about one woman in 200. The standard mammogram tests actually report false positives about five percent of the time. In other words, a woman without cancer will get a negative result 95 percent of the time and a positive result 5 percent of the time. Conditional on receiving a positive test result, a natural reaction is to assume the probability of having cancer is very high, close to 95 percent. In fact, that is not true. Consider that out of 1,000 women approximately 5 will have cancer. Approximately 55 will receive positive results. Thus, conditional on receiving a positive test result the probability of having cancer is only about 9 percent, not 95 percent. Using this example as an introduction, the author then develops the ideas of Bayesian updating of probabilities.
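For readers who want to check the arithmetic, here is a minimal sketch of the Bayesian calculation in Python (mine, not from the book; the near-perfect test sensitivity is an assumption made explicit below):

```python
# Bayes' rule applied to the mammogram example above.
# Prevalence (1 in 200) and false-positive rate (5%) are from the text;
# the near-perfect sensitivity is an assumption for illustration.
prevalence = 1 / 200     # P(cancer)
false_positive = 0.05    # P(positive | no cancer)
sensitivity = 1.00       # P(positive | cancer) -- assumed

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(positive)               = {p_positive:.4f}")              # ~0.055, i.e., ~55 in 1,000
print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}") # ~0.091, about 9 percent
```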

Although this book appropriately spends considerable effort describing quantitative risk measurement techniques, that task is not its true focus. It takes seriously its mission as a practical guide. For example, in turning to the problem of managing risk, Coleman insightfully chooses as his first topic managing people, and the first issue addressed is the principal-agent problem. According to Coleman, "Designing compensation and incentive schemes has to be one of the most difficult and underappreciated, but also one of the most important, aspects of risk management." Although he does not come to a definitive conclusion about how to structure employment contracts, he concludes, "careful thinking about preferences, incentives, compensation, and principal-agent problems enlightens many of the most difficult issues in risk management—issues that I think we as a profession have only begun to address in a substantive manner."


There are many well-known limitations to any attempt to quantify risk, and this book provides a useful cautionary list. Among the many concerns, Coleman highlights that "models for measuring risk will not include all positions and all risks"; "risk measures such as VaR and volatility are backward looking"; "VaR does not measure the 'worst case'"; "quantitative techniques are complex and require expertise and experience to use properly"; and finally, "quantitative risk measures do not properly represent extreme events." And perhaps most significantly, while he discusses many of the events of the recent financial crisis, Coleman makes the useful distinction between idiosyncratic risk, which can be managed by a firm, versus systemic risk, which arises from an economy-wide event outside the control of the firm. This book is focused on the former. Nonetheless, with respect to the latter he concludes that "Systemic risk events . . . are far more damaging because they involve substantial dislocations across a range of assets and across a variety of markets. Furthermore, the steps a firm can take to forestall idiosyncratic risk events are often ineffective against systemic events."

Coleman brings to bear some of the recent insights from behavioral finance, and in particular focuses on the problem of overconfidence, which is, in his words, "the most fundamental and difficult (issue) in all of risk management, because confidence is necessary for success, but overconfidence can lead to disaster." Later he elaborates, "Risk management . . . is also about managing ourselves. Managing our ego, managing our arrogance, our stubbornness, our mistakes. It is not about fancy quantitative techniques but about making good decisions in the face of uncertainty, scanty information, and competing demands." In this context he highlights four characteristics of situations that can lead to risk management mistakes: familiarity, commitment, the herding instinct, and belief inertia.

When focusing on the understanding and communication of risk, Coleman delves deeply into a set of portfolio analysis tools which I helped to develop and utilize while managing risk at Goldman Sachs. These tools, such as the marginal contribution to risk, risk triangles, best hedges, and the best replicating portfolio, were all designed to satisfy the practical need to simplify and highlight the most important aspects of inherently complex combinations of exposures. As we used to repeat often, risk management is about communicating the right information to the right people at the right time.

After covering the theory, the tools, and the practical application, Coleman finally faces the unsatisfying reality that the future is never like the past, and this is particularly true with respect to extreme events. His solution is to recognize this limitation. "Overconfidence in numbers and quantitative techniques, in our ability to represent extreme events, should be subject to severe criticism, because it lulls us into a false sense of security." In the end the firm relies not so much on the risk measurement tools as the good judgment and wisdom of the experienced risk manager. As Coleman correctly concludes, "A poor manager with good risk reports is still a poor manager. The real risk to an organization is in the unanticipated or unexpected—exactly what the quantitative measures capture least well and what a good manager must strive to manage."

BOB LITTERMAN

Partner, Kepos Capital


Preface

Risk management is the art of using lessons from the past to mitigate misfortune and exploit future opportunities—in other words, the art of avoiding the stupid mistakes of yesterday while recognizing that nature can always create new ways for things to go wrong.

This book grew out of a project for the Research Foundation of the CFA Institute. The Research Foundation asked me to write a monograph, a short and practical guide to risk management. I took the commission as a license to write about how I think about risk. Ultimately the project grew far beyond the original mandate and into this book, a book that is, I hope, still a practical guide to financial risk management.

In this book I lay out my view of risk management, a view that has developed over many years as a researcher, trader, and manager. My approach is a little idiosyncratic because risk management itself suffers from a split personality—one side soft management skills, the other side hard mathematics—and any attempt to treat both in the same book will by its nature be something of an experiment. In writing this book I want to do more than just write down the mathematical formulae; I want to explain how we should think about risk, what risk means, why we use a particular risk measure. Most importantly, I want to challenge the accepted wisdom that risk management is or ever should be a separate discipline; managing risk is central to managing a financial firm and must remain the responsibility of anyone who contributes to the profit of the firm.

I entered the financial industry as a trader on a swaps desk. On the desk we lived by the daily and monthly profit and loss. There was nothing more important for managing that P&L than understanding and managing the risk. Risk was around us every day and we needed to build and use practical tools that could help us understand, display, report, and manage risk in all its complexity and variety.

The experience on a trading desk taught me that managing risk is the central part of a financial business. Managing risk is not something to be delegated, not something to be handed over to a risk management department. The measurement of risk can certainly be technical and may require quantitative expertise and a cadre of risk professionals, but the responsibility for management ultimately resides with line managers, senior management, and the board. This lesson is as true for a commercial bank or a portfolio manager as for a trading desk. In any financial business, it is managers who must manage risk, and true risk management can never be devolved to a separate department.

The necessity to manage risk in today's complex markets leads to an inevitable tension between the management side and the quantitative side. Managers traditionally focus on people, process, institutions, incentives—all the components of managing a business. Risk professionals focus on mathematics, models, statistics, data—the quantitative side of the business. Successful performance in today's markets requires that a firm bridge this split personality and integrate both management and quantitative skills.

This book tries to address both sides of the divide. Part One, comprising Chapters 1 through 6, focuses on the management side. I argue that managing risk is as much about managing people, processes, and institutions as it is about numbers, and that a robust and responsive organization is the best tool for responding to a risky environment. But managers also need to be comfortable with quantitative issues: What is risk? How should we think about uncertainty and randomness? What do the quantitative measures such as volatility and VaR mean? These are not just mathematical questions. We need to understand the intuition behind the formulae and use our knowledge to help make decisions.

Part One is not addressed at managers alone. Risk professionals, those focused on building the models and producing the numbers, need to understand how and why the numbers are used in managing risk. As Kendall and Stuart so rightly say, "It's not the figures themselves, it's what you do with them that matters." Part One aims to lay out the common ground where managers and risk professionals meet for the task of measuring and managing risk.

Part Two changes gears to focus on the quantitative tools and techniques for measuring risk. Modern risk measurement is a quantitative field, often the preserve of specialists with mathematical training and expertise. There is no avoiding the statistics, mathematics, and computer technology necessary for risk measurement in today's markets. But we should not shy away from these challenges. The ideas are almost always straightforward, even if the details are difficult. I try to be thorough in covering the theory but also explain the ideas behind the theory. Throughout the book I work with a consistent but simple portfolio to provide examples of key ideas and calculations. Purchasers of the book can access many of these examples online to explore the concepts more fully.

Part Two is aimed primarily at risk professionals, those who need to know the exact formula for calculating, say, the contribution to risk. But managers can also use Part Two to learn more about the concepts behind risk measurement. Chapters 9 and 10 in particular focus on examples and using risk measurement tools. This book should serve as more than simply a reference on how to calculate volatility or learn what a generalized Pareto distribution is. My goal throughout is to find simple explanations for complex concepts—more than anything, I had to explain these concepts to myself.

In the end, this book will be a success if readers come away with both an appreciation of risk management as a management endeavor, and a deeper understanding of the quantitative framework for measuring risk. I hope managers can use this to increase their quantitative skills and knowledge, and that risk professionals can use it to improve their understanding of how the numbers are used in managing the business.

Thomas S. Coleman
Greenwich, CT

March 2012


Acknowledgments

I would like to thank those who helped make this book possible. First and foremost, thanks to Larry Siegel for his valuable insights, suggestions, and diligent editing of the initial Research Foundation manuscript. The Research Foundation of the CFA Institute made this project possible with its generous funding. Many others have contributed throughout the years to my education in managing risk, with special thanks owed to my former colleagues Gian Luca Ambrosio and Michael du Jeu—together we learned many of the world's practical lessons. I thank all those from whom I have learned; the errors, unfortunately, remain my own.


"You haven't told me yet," said Lady Nuttal, "what it is your fiancé does for a living."

"He's a statistician," replied Lamia, with an annoying sense of being on the defensive.

Lady Nuttal was obviously taken aback. It had not occurred to her that statisticians entered into normal social relationships. The species, she would have surmised, was perpetuated in some collateral manner, like mules.

"But Aunt Sara, it's a very interesting profession," said Lamia warmly.

"I don't doubt it," said her aunt, who obviously doubted it very much. "To express anything important in mere figures is so plainly impossible that there must be endless scope for well-paid advice on how to do it. But don't you think that life with a statistician would be rather, shall we say, humdrum?"

Lamia was silent. She felt reluctant to discuss the surprising depth of emotional possibility which she had discovered below Edward's numerical veneer.

"It's not the figures themselves," she said finally, "it's what you do with them that matters."

—K.A.C. Manderville, The Undoing of Lamia Gurdleneck, quoted in Kendall and Stuart (1979, frontispiece).


PART ONE

Managing Risk


CHAPTER 1

Risk Management versus Risk Measurement

Managing risk is at the core of managing any financial organization. This statement may seem obvious, even trivial, but remember that the risk management department is usually separate from trading management or line management. Words matter, and using the term risk management for a group that does not actually manage anything leads to the notion that managing risk is somehow different from managing other affairs within the firm. Indeed, a director at a large financial group was quoted in the Financial Times as saying that "A board can't be a risk manager."1 In reality, the board has the same responsibility to understand and monitor the firm's risk as it has to understand and monitor the firm's profit or financial position.

To repeat, managing risk is at the core of managing any financial organization; it is too important a responsibility for a firm's managers to delegate. Managing risk is about making the tactical and strategic decisions to control those risks that should be controlled and to exploit those opportunities that can be exploited. Although managing risk does involve those quantitative tools and activities generally covered in a risk management textbook, in reality, risk management is as much the art of managing people, processes, and institutions as it is the science of measuring and quantifying risk. In fact, one of the central arguments of this book is that risk management is not the same as risk measurement. In the financial industry probably more than any other, risk management must be a central responsibility for line managers from the board and CEO down through individual trading units and portfolio managers. Managers within a financial organization must be, before anything else, risk managers in the true sense of managing the risks that the firm faces.

1 Guerrera and Larsen (2008).


Extending the focus from the passive measurement and monitoring of risk to the active management of risk also drives one toward tools to help identify the type and direction of risks and tools to help identify hedges and strategies that alter risk. It argues for a tighter connection between risk management (traditionally focused on monitoring risk) and portfolio management (in which one decides how much risk to take in the pursuit of profit).

Risk measurement is necessary to support the management of risk. Risk measurement is the specialized task of quantifying and communicating risk. In the financial industry, risk measurement has, justifiably, grown into a specialized quantitative discipline. In many institutions, those focused on risk measurement will be organized into an independent department with reporting lines separate from line managers.

Risk measurement has three goals:

1. Uncovering known risks faced by the portfolio or the firm. By known risks, I mean risks that can be identified and understood with study and analysis because these or similar risks have been experienced in the past by this particular firm or others. Such risks are often not obvious or immediately apparent, possibly because of the size or diversity of a portfolio, but these risks can be uncovered with diligence.

2. Making the known risks easy to see, understand, and compare—in other words, the effective, simple, and transparent display and reporting of risk. Value at risk, or VaR, is a popular tool in this arena, but there are other, complementary, techniques and tools.

3. Trying to understand and uncover the unknown, or unanticipated risks—those that may not be easy to understand or anticipate, for example, because the organization or industry has not experienced them before.

Risk management, as I just argued, is the responsibility of managers at all levels of an organization. To support the management of risk, risk measurement and reporting should be consistent throughout the firm, from the most disaggregate level (say, the individual trading desk) up to the top management level. Risk measured at the lowest level should aggregate in a consistent manner to firmwide risk. Although this risk aggregation is never easy to accomplish, a senior manager should be able to view firmwide risk, but then, like the layers of an onion or a Russian nesting doll, peel back the layers and look at increasingly detailed and disaggregated risk. A uniform foundation for risk reporting across a firm provides immense benefits that are not available when firmwide and desk-level risks are treated on a different basis.


1.1 CONTRASTING RISK MANAGEMENT AND RISK MEASUREMENT

The distinction I draw between risk management and risk measurement argues for a subtle but important change in focus from the standard risk management approach: a focus on understanding and managing risk in addition to the independent measurement of risk. The term risk management, unfortunately, has been appropriated to describe what should be termed risk measurement: the measuring and quantifying of risk. Risk measurement requires specialized expertise and should generally be organized into a department separate from the main risk-taking units within the organization. Managing risk, in contrast, must be treated as a core competence of a financial firm and of those charged with managing the firm. Appropriating the term risk management in this way can mislead one to think that the risk takers' responsibility to manage risk is somehow lessened, diluting their responsibility to make the decisions necessary to effectively manage risk. Managers cannot delegate their responsibilities to manage risk, and there should no more be a separate risk management department than there should be a separate profit management department.

The standard view posits risk management as a separate discipline and an independent department. I argue that risk measurement indeed requires technical skills and often should exist as a separate department. The risk measurement department should support line managers by measuring and assessing risk—in a manner analogous to the accounting department supporting line managers by measuring returns and profit and loss. It still remains line managers' responsibility to manage the risk of the firm. Neither risk measurement experts nor line managers (who have the responsibility for managing risk) should confuse the measurement of risk with the management of risk.

1.2 REDEFINITION AND REFOCUS FOR RISK MANAGEMENT

The focus on managing risk argues for a modesty of tools and a boldness of goals. Risk measurement tools can go only so far. They help one to understand current and past exposures, which is a valuable and necessary undertaking but clearly not sufficient for actually managing risk. In contrast, the goal of risk management should be to use the understanding provided by risk measurement to manage future risks. The goal of managing risk with incomplete information is daunting precisely because quantitative risk measurement tools often fail to capture unanticipated events that pose the greatest risk. Making decisions with incomplete information is part of almost any human endeavor. The art of risk management is not just in responding to anticipated events, but in building a culture and organization that can respond to risk and withstand unanticipated events. In other words, risk management is about building flexible and robust processes and organizations with the flexibility to identify and respond to risks that were not important or recognized in the past, the robustness to withstand unforeseen circumstances, and the ability to capitalize on new opportunities.

Possibly the best description of my view of risk management comes from a book not even concerned with financial risk management, the delightful Luck by the philosopher Nicholas Rescher (2001):

The bottom line is that while we cannot control luck [risk] through superstitious interventions, we can indeed influence luck through the less dramatic but infinitely more efficacious principles of prudence. In particular, three resources come to the fore here:

1. Risk management: managing the direction of and the extent of exposure to risk, and adjusting our risk-taking behavior in a sensible way over the overcautious-to-heedless spectrum.

2. Damage control: protecting ourselves against the ravages of bad luck by prudential measures, such as insurance, "hedging one's bets," and the like.

3. Opportunity capitalization: avoiding excessive caution by positioning oneself to take advantage of opportunities so as to enlarge the prospect of converting promising possibilities into actual benefits. (p. 187)

1.3 QUANTITATIVE MEASUREMENT AND A CONSISTENT FRAMEWORK

The measurement of risk, the language of risk, seemingly even the definition of risk itself—all these can vary dramatically across assets and across the levels of a firm. Traders may talk about DV01 (dollar value of an 01) or adjusted duration for a bond, beta for an equity security, the notional amount of foreign currency for a foreign exchange (FX) position, or the Pandora's box of delta, gamma, theta, and vega for an option. A risk manager assessing the overall risk of a firm might discuss the VaR, or expected shortfall, or lower semivariance.


This plethora of terms is often confusing and seems to suggest substantially different views of risk. (I do not expect that the nonspecialist reader will know what all these terms mean at this point. They will be defined as needed.) Nonetheless, these terms all tackle the same question in one way or another: What is the variability of profits and losses (P&L)? Viewing everything through the lens of P&L variability provides a unifying framework across asset classes and across levels of the firm, from an individual equity trader up through the board.

The underlying foundations can and should be consistent. Measuring and reporting risk in a consistent manner throughout the firm provides substantial benefits. Although reporting needs to be tailored appropriately, it is important that the foundations—the way risk is calculated—be consistent from the granular level up to the aggregate level.

Consistency provides two benefits. First, senior managers can have the confidence that when they manage the firmwide risk, they are actually managing the aggregation of individual units' risks. Senior managers can drill down to the sources of risk when necessary. Second, managers at the individual desk level can know that when there is a question regarding their risk from a senior manager, it is relevant to the risk they are actually managing. The risks may be expressed using different terminology, but when risk is calculated and reported on a consistent basis, the various risks can be translated into a common language.

An example will help demonstrate how the underlying foundations can be consistent even when the language of risk is quite different across levels of a firm. Consider the market risk for a very simple portfolio:

- $20 million nominal of a 10-year U.S. Treasury (UST) bond.
- €7 million nominal of CAC 40 Index (French equity index) futures.

We can take this as a very simple example of a trading firm, with the bond representing the positions held by a fixed-income trading desk or investment portfolio and the futures representing the positions held by an equity trading desk or investment portfolio. In a real firm, the fixed-income portfolio would have many positions, with a fixed-income trader or portfolio manager involved in the minute-to-minute management of the positions, and a similar situation would exist for the equity portfolio. Senior managers would be responsible for the overall or combined risk but would not have involvement in the day-to-day decisions.

Desk-level traders require a very granular view of their risk. They require, primarily, information on the exposure or sensitivity of a portfolio to market risk factors. The fixed-income trader may measure exposure using duration, DV01 (also called basis point value [BPV] or dollar duration), or 5- or 10-year bond equivalents.2 The equity trader might measure the beta-equivalent notional of the position.

In all cases, the trader is measuring only the exposure or sensitivity—that is, how much the position makes or loses when the market moves a specified amount. A simple report showing the exposure or sensitivity for the fixed-income and equity portfolios might look like Table 1.1, which shows the DV01 for the bond and the beta-equivalent holding for the equity. The DV01 of the bond is $18,288, which means that if the yield falls by 1 basis point (bp), the profit will be $18,288.3 The beta-equivalent position of the equity holding is €7 million, or $9.1 million, in the CAC index.

Market P&L and the distribution of P&L are always the result of two elements interacting: the exposure or sensitivity of positions to market risk factors and the distribution of the risk factors. The sample reports in Table 1.1 show only the first, the exposure to market risk factors. Desk-level traders will usually have knowledge of and experience with the markets, intuitively knowing how likely large moves are versus small moves, and so already have an understanding of the distribution of market risk factors. They generally do not require a formal report to tell them how the market might move but can form their own estimates of the distribution of P&L. In the end, however, it is the distribution of P&L that they use to manage their portfolios.

TABLE 1.1 Sample Exposure Report

Yield Curve (per 1 bp down)         Equity (beta-equivalent notional)
10-year par yield    $18,288        CAC    $9,100,000

2 Fixed-income exposure measures such as these are discussed in many texts, including Coleman (1998).
3 Instead of the DV01 of $18,288, the exposure or sensitivity could be expressed as an adjusted or modified duration of 8.2 or five-year bond equivalent of $39 million. In all cases, it comes to the same thing: measuring how much the portfolio moves for a given move in market yields. The DV01 is the dollar sensitivity to a 1 bp move in yields, and the modified duration is the percentage sensitivity to a 100 bp move in yields. Modified duration can be converted to DV01 by multiplying the modified duration times the dollar holding (and dividing by 10,000 because the duration is percent change per 100 bps and the DV01 is dollars per 1 bp). In this case, $20 million notional of the bond is worth $22.256 million, and 8.2 × 22,256,000/10,000 = $18,288 (within rounding).

A more senior manager, removed somewhat from day-to-day trading and with responsibility for a wide range of portfolios, may not have the same intimate and up-to-date knowledge as the desk-level trader for judging the likelihood of large versus small moves. The manager may require additional information on the distribution of market moves.
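The duration-to-DV01 conversion in footnote 3 is simple enough to verify directly. A minimal Python sketch (mine, not from the book; variable names are illustrative):

```python
# Converting modified duration to DV01, as in footnote 3.
# Modified duration: percent price change per 100 bp move in yield.
# DV01: dollar price change per 1 bp move in yield.
modified_duration = 8.2       # percent per 100 bp (rounded, per the footnote)
market_value = 22_256_000     # $20 million notional is worth $22.256 million

dv01 = modified_duration * market_value / 10_000
print(f"DV01 = ${dv01:,.0f}")  # ~$18,250; the text's $18,288 reflects unrounded duration
```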

Table 1.2 shows such additional information: the daily volatility or standard deviation of market moves for yields and the CAC index. We see that the standard deviation of 10-year yields is 7.1 bps and of the CAC index is 2.5 percent. This means that 10-year yields will rise or fall by 7.1 bps (or more) and that the CAC index will move by 2.5 percent (or more) roughly one day out of three. In other words, 7.1 bps provides a rough scale for bond market variability and 2.5 percent a rough scale for equity market volatility.

TABLE 1.2 Volatility or Standard Deviation of Individual Market Yield Moves

Yield Curve (bps per day)           Equity (% per day)
10-year par yield    7.15           CAC    2.54

The market and exposure measures from Tables 1.1 and 1.2 can be combined to provide an estimate of the P&L volatility for the bond and equity positions, shown in Table 1.3.4

- Bond P&L volatility ≈ $18,288 × 7.15 ≈ $130,750
- Equity P&L volatility ≈ $9,100,000 × 0.0254 ≈ $230,825

These values give a formal measure of the P&L variability or P&L distribution: the standard deviation of the P&L distributions. The $130,750 for the fixed-income portfolio means that the portfolio will make or lose about $130,750 (or more) roughly one day out of three; $130,750 provides a rough scale for the P&L variability. Table 1.3 combines the information in Tables 1.1 and 1.2 to provide information on the P&L distribution in a logical, comprehensible manner.

TABLE 1.3 Portfolio Sensitivity to One Standard Deviation Moves in Specific Market Risk Factors

Yield Curve (yield down)            Equity (index up)
10-year par yield    $130,750       CAC    $230,825

4 Assuming linearity as we do here is simple but not necessary. There are alternate methodologies for obtaining the P&L distribution from the underlying position exposures and market risk factors; the linear approach is used here for illustration.
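To make the Table 1.3 arithmetic concrete, here is a small Python sketch (mine, not from the book) reproducing the exposure-times-volatility calculation under the same linearity assumption noted in footnote 4:

```python
from statistics import NormalDist

# P&L volatility of each position = exposure x volatility of its risk factor
# (the linear approximation of footnote 4).
bond_dv01 = 18_288           # $ per 1 bp move in 10-year yield (Table 1.1)
yield_vol_bp = 7.15          # daily std. dev. of 10-year yield, bp (Table 1.2)
equity_notional = 9_100_000  # beta-equivalent CAC position, $ (Table 1.1)
cac_vol = 0.0254             # daily std. dev. of CAC returns (Table 1.2)

bond_pnl_vol = bond_dv01 * yield_vol_bp      # ~$130,760 (text: $130,750)
equity_pnl_vol = equity_notional * cac_vol   # ~$231,140 (text: $230,825, from unrounded vols)
print(f"bond P&L vol   ~ ${bond_pnl_vol:,.0f}")
print(f"equity P&L vol ~ ${equity_pnl_vol:,.0f}")

# "Roughly one day out of three": under normality, a move beyond one
# standard deviation (in either direction) has probability ~0.32.
print(f"P(|move| > 1 sigma) ~ {2 * (1 - NormalDist().cdf(1.0)):.2f}")
```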


A report such as Table 1.3 provides valuable information. Nonetheless, a senior manager will be most concerned with the variability of the overall P&L, taking all the positions and all possible market movements into account. Doing so requires measuring and accounting for how 10-year yields move in relation to equities—that is, taking into consideration the positions in Table 1.1 and possible movements and co-movements, not just the volatilities of yields considered on their own as in Table 1.2.

For this simple two-asset portfolio, an estimate of the variability of the overall P&L can be produced relatively easily. The standard deviation of the combined P&L will be5

Portfolio volatility ≈ √(Bond vol² + 2 × ρ × Bond vol × Eq vol + Eq vol²)
                    = √(130,750² + 2 × 0.24 × 130,750 × 230,825 + 230,825²)
                    ≈ $291,300                                        (1.1)
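Equation 1.1 in executable form, as a minimal Python sketch (mine, not from the book), using the 0.24 correlation quoted in footnote 5:

```python
import math

# Two-asset combination of P&L volatilities, per Equation 1.1.
bond_vol = 130_750
equity_vol = 230_825
rho = 0.24  # correlation between bond and CAC equity P&L (footnote 5)

portfolio_vol = math.sqrt(
    bond_vol**2 + 2 * rho * bond_vol * equity_vol + equity_vol**2
)
print(f"portfolio P&L vol ~ ${portfolio_vol:,.0f}")  # ~$291,300
```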

Diagrammatically, the situation might be represented by Figure 1.1. The separate portfolios and individual traders with their detailed exposure reports are represented on the bottom row. (In this example, we have only two, but in a realistic portfolio there would be many more.) Individual traders focus on exposures, using their knowledge of potential market moves to form an assessment of the distribution of P&L.

Managers who are more removed from the day-to-day trading may require the combination of exposure and market move information to form an estimate of the P&L distributions. This is done in Table 1.3 and shown diagrammatically in the third row of Figure 1.1. Assessing the overall P&L requires combining the distribution of individual portfolios and assets into an overall distribution—performed in Equation 1.1 and shown diagrammatically in the top row of Figure 1.1.6

The important point is that the goal is the same for all assets and at all levels of the firm: measure, understand, and manage the P&L. This is as true for the individual trader who studies bond DV01s all day as it is for the CEO who examines the firm-wide VaR.

5 How volatilities combine is discussed more in Chapter 8. The correlation between bonds and the CAC equity is 0.24.
6 For more complicated portfolios and for risk measures other than volatility (for example, VaR or expected shortfall), the problem of combining multiple asset distributions into an overall distribution may be difficult, but the idea is the same: Combine the individual positions to estimate the variability or dispersion of the overall P&L.


[FIGURE 1.1 Representation of Risk Reporting at Various Levels. The original diagram shows, from bottom to top: exposures (Table 1.1: $18,288 and $9.1M) combined with market moves (Table 1.2: 7.1 bp and 2.5%) to give individual position distributions (Table 1.3: vol = $130,750 and vol = $230,825), which are in turn combined into the overall P&L distribution (vol = $291,300). Note: M = million.]


The portfolio we have been considering is particularly simple and has only two assets. The exposure report, Table 1.1, is simple and easy to comprehend. A more realistic portfolio, however, would have many assets with exposures to many market risk factors. For example, the fixed-income portfolio, instead of having a single DV01 of $18,288 included in a simple report like Table 1.1, might show exposure to 10 or 15 yield curve points for each of five or eight currencies. A granular report used by a trader could easily have 30 or 50 or 70 entries—providing the detail necessary for the trader to manage the portfolio moment by moment but proving to be confusing for anyone aiming at an overview of the complete portfolio.

The problem mushrooms when we consider multiple portfolios (say, a government trading desk, a swap trading desk, a credit desk, an equity desk, and an FX trading desk). A senior manager with overall responsibility for multiple portfolios requires tools for aggregating the risk, from simple exposures to individual portfolio distributions up to an overall distribution. The process of aggregation shown in Figure 1.1 becomes absolutely necessary when the number and type of positions and subportfolios increase.

Building the risk and P&L distributions from the bottom up as shown in Figure 1.1 is easy in concept, even though it is invariably difficult in practice. Equally or even more important, however, is going in the opposite direction: drilling down from the overall P&L to uncover and understand the sources of risk. This aspect of risk measurement is not always covered in great depth, but it is critically important. Managing the overall risk means making decisions about what risks to take on or dispose of, and making those decisions requires understanding the sources of the risk.

Consistency in calculating risk measures, building from the disaggregate up to the aggregate level and then drilling back down, is critically important. It is only by using a consistent framework that the full benefits of managing risk throughout the firm can be realized.

1.4 SYSTEMIC VERSUS IDIOSYNCRATIC RISK

There is an important distinction, when thinking about risk, between what we might call idiosyncratic risk and systemic risk. This distinction is different from, although conceptually related to, the distinction between idiosyncratic and systemic (beta or market-wide) risk in the capital asset pricing model. Idiosyncratic risk is the risk that is specific to a particular firm, and systemic risk is widespread across the financial system. The distinction between the two is sometimes hazy but very important. Barings Bank's 1995 failure was specific to Barings (although its 1890 failure was related to a more general crisis involving Argentine bonds). In contrast, the failure of Lehman Brothers and AIG in 2008 was related to a systemic crisis in the housing market and wider credit markets.

The distinction between idiosyncratic and systemic risk is important for two reasons. First, the sources of idiosyncratic and systemic risk are different. Idiosyncratic risk arises from within a firm and is generally under the control of the firm and its managers. Systemic risk is shared across firms and is often the result of misplaced government intervention, inappropriate economic policies, or exogenous events, such as natural disasters. As a consequence, the response to the two sources of risk will be quite different. Managers within a firm can usually control and manage idiosyncratic risk, but they often cannot control systemic risk. More importantly, firms generally take the macroeconomic environment as given and adapt to it rather than work to alter the systemic risk environment.

The second reason the distinction is important is that the consequences are quite different. A firm-specific risk disaster is serious for the firm and individuals involved, but the repercussions are generally limited to the firm's owners, debtors, and customers. A systemic risk management disaster, however, often has serious implications for the macroeconomy and larger society. Consider the Great Depression of the 1930s, the developing countries' debt crisis of the late 1970s and 1980s, the U.S. savings and loan crisis of the 1980s, the Japanese crisis post-1990, the Russian default of 1998, the various Asian crises of the late 1990s, and the worldwide crisis of 2008, to mention only a few. These events all involved systemic risk and risk management failures, and all had huge costs in the form of direct (bailout) and indirect (lost output) costs.

It is important to remember the distinction between idiosyncratic and systemic risk because in the aftermath of a systemic crisis, the two often become conflated in discussions of the crisis. Better idiosyncratic (individual firm) risk management cannot substitute for adequate systemic (macroeconomic and policy) risk management. Failures of risk management are often held up as the primary driver of systemic failure. Although it is correct that better idiosyncratic risk management can mitigate the impact of systemic risk, it cannot substitute for appropriate macroeconomic policy. Politicians—indeed, all of us participating in the political process—must take responsibility for setting the policies that determine the incentives, rewards, and costs that shape systemic risk.

This book is about idiosyncratic risk and risk management—the risks that an individual firm can control. The topic of systemic risk is vitally important, but it is the subject for a different book—see, for example, the classic Manias, Panics, and Crashes: A History of Financial Crises by Kindleberger (1989) or the more recent This Time Is Different: Eight Centuries of Financial Folly by Reinhart and Rogoff (2009).


CHAPTER 2

Risk, Uncertainty, Probability, and Luck

Managing risk requires thinking about risk, and thinking about risk requires thinking about and being comfortable with uncertainty and randomness. It turns out that, as humans, we are often poor at thinking probabilistically. We like certainty in our lives and thinking about randomness does not come naturally; probability is often nonintuitive. We should not abandon the effort, however; just as we can learn to ride a bike as a child, we can learn to think probabilistically. Doing so opens horizons, allows us to embrace the fluid, uncertain nature of our world.

This chapter focuses on how to think about risk, uncertainty, and probability. This chapter provides some of the tools we will use throughout the rest of the book, but more importantly, it helps us move from the world as rigid and fixed to a world that is changeable and contingent, which helps us explore the wonderful complexity of our world.

2.1 WHAT IS RISK?

Before asking, "What is risk management?" we need to ask, "What is risk?" This question is not trivial; risk is a very slippery concept. To define risk, we need to consider both the uncertainty of future outcomes and the utility or benefit of those outcomes. When someone ventures onto a frozen lake, that person is taking a risk not just because the ice may break but because if it does break, the result will be bad. In contrast, for a frozen lake upon which no one is trying to cross on foot, we would talk of the chance of the ice breaking; we would use the word risk only if the breaking ice had an impact on someone or something. Or, to paraphrase the philosopher George Berkeley, if a tree falls in the forest but there is nobody there for it to fall upon, is it risky?


The word risk is usually associated with downside or bad outcomes, but when trying to understand financial risk, limiting the analysis to just the downside would be a mistake. Managing financial risk is as much about exploiting opportunities for gain as it is about avoiding downside. It is true that, everything else held equal, more randomness is bad and less randomness is good. It is certainly appropriate to focus, as most risk measurement texts do, on downside measures (for example, lower quantiles and VaR). But upside risk cannot be ignored. In financial markets, everything else is never equal, and more uncertainty is almost invariably associated with more opportunity for gain. Upside risk might be better called opportunity, but downside risk and upside opportunity are mirror images, and higher risk is compensated by higher expected returns. Successful financial firms are those that effectively manage all risks: controlling the downside and exploiting the upside.1

Risk combines both the uncertainty of outcomes and the utility or benefit of outcomes. For financial firms, the future outcomes are profits—P&L measured in monetary units (that is, in dollars or as rates of return). The assumption that only profits matter is pretty close to the truth because the primary objective of financial firms is to maximize profits. Other things—status, firm ranking, jobs for life, and so on—may matter, but these are secondary and are ignored here.

Future outcomes are summarized by P&L, and the uncertainty in profits is described by the distribution or density function. The distribution and density functions map the many possible realizations for the P&L, with profits sometimes high and sometimes low. Figure 2.1 shows the possible P&L from a $10 coin toss bet (only two possible outcomes) and from a hypothetical yield curve strategy (many possible outcomes). The vertical axis measures the probability of a particular outcome, and the horizontal axis measures the level of profit or loss. For the coin toss, each outcome has a probability of one-half. For the yield curve strategy, there is a range of possible outcomes, each with some probability. In the end, however, what matters is the distribution of P&L—how much one can make or lose.

FIGURE 2.1 P&L from Coin Toss Bet and Hypothetical Yield Curve Strategy (Panel A: Coin Toss Bet, outcomes -$10 and +$10; Panel B: Hypothetical Yield Curve Strategy; horizontal axis runs from loss to profit)

1 Gigerenzer (2002, 26) emphasizes the importance of thinking of risk as both positive and negative.

The distribution function contains all the objective information about the random outcomes, but the benefit (positive or negative) provided by any given level of profit or loss depends on an investor's preferences or utility function—how much an investor values each positive outcome and how much he is averse to each negative one. Whether one distribution is ranked higher than another (one set of outcomes is preferred to another) depends on an investor's preferences.

Generally, there will be no unique ranking of distributions in the sense that distribution F is preferred to distribution G by all investors. Although it is true that in certain cases we can say that distribution F is unambiguously less risky than G, these cases are of limited usefulness. As an example, consider the two distributions in Panel A of Figure 2.2. They have the same mean, but distribution F has lower dispersion and a density function that is inside G. Distribution G will be considered worse and thus riskier by all risk-averse investors.2

More often, there will be no unique ranking, and some investors will prefer one distribution while others will prefer another. Panel B of Figure 2.2 shows two distributions: H with less dispersion but lower mean and K with more dispersion but higher mean. A particular investor could determine which distribution is worse given her own preferences, and some investors may prefer H while others prefer K, but there is no unique ranking of which is riskier.

The bottom line is that the riskiness of a distribution will depend on the particular investor's preferences. There is no unique risk ranking for all distributions and all investors. To rank distributions and properly define risk, preferences must be introduced.

2 Technically, the distribution F is said to dominate G according to second-order stochastic dominance. For a discussion of stochastic dominance, see the essay by Haim Levy in Eatwell, Milgate, and Newman (1987, The New Palgrave, vol. 4, 500–501) or on the Internet (New School, undated). In practice, distributions F and G rarely exist simultaneously in nature because the price system ensures that they do not. Because virtually anyone would consider G worse than F, the asset with distribution G would have to go down in price—thus ensuring that the expected return (mean) would be higher.


Markowitz (1959) implicitly provided a model of preferences when he introduced the mean-variance portfolio allocation framework that is now part of our financial and economic heritage. He considered a hypothetical investor who places positive value on the mean or expected return and negative value on the variance (or standard deviation) of return. For this investor, the trade-off between sets of outcomes depends only on the mean and variance. Risk is usually equated to variance in this framework because variance uniquely measures the disutility resulting from greater dispersion in outcomes.

In the mean-variance Markowitz framework, the problem is reduced to deciding on the trade-off between mean and variance (expected reward and risk). The exact trade-off will vary among investors, depending on their relative valuation of the benefit of mean return and the cost of variance. Even here, the variance uniquely ranks distributions on a preference scale only when the means are equal. In Figure 2.2, Panel B, distribution K might be preferred to H by some investors, even though K has a higher variance (K also has a higher mean). Even when limiting ourselves to quadratic utility, we must consider the precise trade-off between mean and variance.

FIGURE 2.2 Distributions with and without Unique Risk Ranking (Panel A: With Unique Risk Ranking, distributions F and G around a common mean; Panel B: Without Unique Risk Ranking, distributions H and K; horizontal axes run from losses to profits)

Markowitz's framework provides immense insight into the investment process and portfolio allocation process, but it is an idealized model. Risk can be uniquely identified with standard deviation or volatility of returns only when returns are normally distributed (so that the distribution is fully characterized by the mean and standard deviation) or when investors' utility is quadratic (so they care only about mean and standard deviation, even if distributions differ in other ways [moments]).
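To make the dependence on preferences concrete, here is a minimal sketch in Python. The means, standard deviations, and risk-aversion values are illustrative assumptions, not numbers from the text; the sketch simply ranks two hypothetical distributions, H and K, by a Markowitz-style utility (mean minus a penalty on variance) and shows the ranking flip as risk aversion rises.

```python
def mv_utility(mean, std, risk_aversion):
    """Mean-variance utility: mean return penalized by variance."""
    return mean - risk_aversion * std ** 2

# Illustrative numbers in the spirit of Figure 2.2, Panel B:
# H has lower mean and lower dispersion, K higher mean and higher dispersion.
h_mean, h_std = 0.04, 0.05
k_mean, k_std = 0.08, 0.15

for risk_aversion in (0.5, 1.0, 3.0):
    u_h = mv_utility(h_mean, h_std, risk_aversion)
    u_k = mv_utility(k_mean, k_std, risk_aversion)
    choice = "K" if u_k > u_h else "H"
    print(f"risk aversion {risk_aversion}: "
          f"U(H) = {u_h:.4f}, U(K) = {u_k:.4f} -> prefers {choice}")

# A risk-tolerant investor (low risk aversion) prefers K; a sufficiently
# risk-averse investor prefers H. Neither is riskier in any unique sense.
```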

Although risk properly depends on both the distribution and investor preferences, for the rest of this book I focus on the distribution and largely ignore preferences. Preferences are difficult to measure and vary from one investor to another. Importantly, however, I do assume that preferences depend only on P&L: If we know the whole P&L distribution, we can apply it to any particular investor's preferences. Thus, as a working definition of risk for this book, I use the following: Risk is the possibility of P&L being different from what is expected or anticipated; risk is uncertainty or randomness measured by the distribution of future P&L. This statement is relatively general and, effectively, evades the problem of having to consider preferences or the utility of future outcomes, and it achieves the simplification necessary for a fruitful discussion of risk measurement and risk management to proceed.3

2.2 RISK MEASURES

One important consequence of viewing risk as the distribution of future P&L is that risk is multifaceted and cannot be defined as a single number; we need to consider the full distribution of possible outcomes. In practice, however, we will rarely know or use the full P&L distribution. We will usually use summary measures that tell us things about the distribution, either because the full distribution is too difficult to measure or too complicated to easily grasp or because we simply want a convenient way to summarize the distribution.

3 If we know the whole distribution, we can apply that to any particular investor's preferences to find the utility of the set of P&L outcomes. Thus, focusing on the full distribution means we can evade the issue of preferences.


These summary measures can be called risk measures: numbers that summarize important characteristics of the distribution (risk). The first or most important characteristic to summarize is the dispersion, or spread, of the distribution. The standard deviation is the best-known summary measure for the spread of a distribution, and it is an incredibly valuable risk measure. (Although it sometimes does not get the recognition it deserves from theorists, it is widely used in practice.) But plenty of other measures tell us about the spread, the shape, or other specific characteristics of the distribution.

Summary measures for distribution and density functions are common in statistics. For any distribution, the first two features of interest are location, on the one hand, and scale (or dispersion), on the other. Location quantifies the central tendency of some typical value, and scale or dispersion quantifies the spread of possible values around the central value. Summary measures are useful but somewhat arbitrary because the properties they are trying to measure are somewhat vague.4 For risk measurement, scale is generally more important than location, primarily because the dispersion of P&L is large relative to the typical value.5

Figure 2.3 shows the P&L distribution (more correctly, the density function) for a hypothetical bond portfolio. The distribution is fairly well behaved, being symmetrical and close to normal or Gaussian. In this case, the mean of the distribution is a good indication of the central tendency of the distribution and serves as a good measure of location. The standard deviation gives a good indication of the spread or dispersion of the distribution and is thus a good measure of scale or dispersion.

FIGURE 2.3 P&L Distribution for Hypothetical Bond Portfolio (mean, or location, = 0; standard deviation, or scale/dispersion, = $130,800)
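The location and scale summaries in Figure 2.3 are easy to reproduce. A minimal sketch in Python, under the assumption (made here purely for illustration) that the portfolio's P&L is normal with mean 0 and standard deviation $130,800 as in the figure:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Simulated daily P&L for the hypothetical bond portfolio of Figure 2.3:
# symmetric and close to normal, location 0, scale $130,800.
pnl = rng.normal(loc=0.0, scale=130_800.0, size=1_000_000)

print(f"location (mean):       {pnl.mean():12,.0f}")
print(f"scale (std deviation): {pnl.std():12,.0f}")

# Other summary measures describe other features of the distribution;
# a lower quantile such as the 5th percentile is the kind of downside
# (VaR-style) measure mentioned earlier in the chapter.
print(f"5th percentile:        {np.percentile(pnl, 5):12,.0f}")
```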

Particular measures work well in particular cases, but in general, one single number does not always work well for characterizing either location or scale. It is totally misleading to think there is a single number that is the risk, that risk can be summarized by a single number that works in all cases for all assets and for all investors. Risk is multifaceted. There are better and worse numbers, some better or worse in particular circumstances, but it will almost never be the case (except for textbook examples such as normality or quadratic utility) that a single number will suffice. Indeed, the all-too-common tendency to reduce risk to a single number is part of the "illusion of certainty" (to use a phrase from Gigerenzer 2002) and epitomizes the difficulty of thinking about uncertainty, to which I turn next.

4 See, for example, Cramér (1974), sections 15.5 and 15.6. The following comments are appropriate: "All measures of location and dispersion, and of similar properties, are to a large extent arbitrary. This is quite natural, since the properties to be described by such parameters are too vaguely defined to admit of unique measurement by means of a single number. Each measure has advantages and disadvantages of its own, and a measure which renders excellent service in one case may be more or less useless in another" (pp. 181–182).
5 For the S&P 500 Index, the daily standard deviation is roughly 1.2 percent and the average daily return is only 0.03 percent (calculated from Ibbotson Associates data for 1926 to 2007, which show the annualized mean and standard deviation for monthly capital appreciation returns are 7.41 percent and 19.15 percent).

2.3 RANDOMNESS AND THE ILLUSION OF CERTAINTY

Thinking about uncertainty and randomness is hard, if only because it is more difficult to think about what we do not know than about what we do. Life would be easier if risk could be reduced to a single number, but it cannot be. There is a human tendency and a strong temptation to distill future uncertainty and contingency down to a single, definitive number, providing the illusion of certainty. But many mistakes and misunderstandings ensue when one ignores future contingency and relies on a fixed number to represent the changeable future. The search for a single risk number is an example of the human characteristic of trying to reduce a complex, multifaceted world to a single factor.

To understand, appreciate, and work with risk, we have to move away from rigid, fixed thinking and expand to consider alternatives. We must give up any illusion that there is certainty in this world and embrace the future as fluid, changeable, and contingent. In the words of Gigerenzer (2002), "Giving up the illusion of certainty enables us to enjoy and explore the complexity of the world in which we live" (p. 231).


Difficulties with Human Intuition

Randomness pervades our world, but human intuition is not very good at working with randomness and probabilities. Experience and training do not always groom us to understand or live comfortably with uncertainty. In fact, a whole industry and literature are based on studying how people make mistakes when thinking about and judging probability. In the 1930s, "researchers noted that people could neither make up a sequence of [random] numbers . . . nor recognize reliably whether a given string was randomly generated" (Mlodinow 2008, ix). The best-known academic research in this area is by the psychologists Daniel Kahneman and Amos Tversky.6

Kahneman and Tversky did much to develop the idea that people use heuristics (rules of thumb or shortcuts for solving complex problems) when faced with problems of uncertainty and randomness. They found that heuristics lead to predictable and consistent mistakes (cognitive biases). They worked together for many years, publishing important early work in the 1970s. Kahneman received the 2002 Nobel Prize in Economic Sciences "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty."7 (Tversky died in 1996, and the Nobel Prize is not awarded posthumously.)

One oft-cited experiment shows the difficulty in thinking about randomness and probability. Subjects were asked to assess the probability of statements about someone's occupation and interests given information about the person's background and character.8 In the experiment, Tversky and Kahneman presented participants with a description of Linda—31 years old, single, outspoken, and very bright. In college, Linda majored in philosophy, was deeply concerned with discrimination and social justice, and participated in antinuclear demonstrations. The experiment participants were then asked to rank the probability of three possible descriptions of Linda's current occupation and interests (that is, extrapolating forward from Linda's college background to her current status):

A. Linda is a bank teller
B. Linda is active in the feminist movement
C. Linda is a bank teller and is active in the feminist movement

6 See, for example, Kahneman and Tversky (1973) and Tversky and Kahneman (1974).
7 http://nobelprize.org/nobel_prizes/economics/laureates/2002/.
8 See Kahneman, Slovic, and Tversky (1982, 90–98) for the original reference. The present description is a somewhat abbreviated version of that in Mlodinow (2008).


Eighty-seven percent of the subjects ranked the probability of bank teller and feminist together higher than bank teller alone (in other words, they ranked C, which is both A and B together, above A alone). But this is mathematically impossible. Whatever Linda's current employment and interests are, the probability that Linda is both a bank teller and also an active feminist (C—that is, A and B together) cannot be higher than the probability of her being just a bank teller. No matter what the particulars, the probability of A and B together is never higher than the probability of A alone. Another way to see this problem is to note that the total universe of bank tellers is much larger than the subset of bank tellers who are also active feminists, so it has to be more likely that someone is a bank teller than that she is a bank teller who is also an active feminist.

FURTHER THOUGHTS ABOUT LINDA THE BANK TELLER

The bank teller/feminist combination may be less likely, yet psychologically it is more satisfying. The explanation possibly lies in our everyday experience and in the tasks we practice regularly. The essence of Kahneman and Tversky's experiment is to take Linda's college life and make probability statements about her future occupation. We do not commonly do this. We more frequently do the reverse: meet new acquaintances about whom we have limited information and then try to infer more about their character and background. In other words, it would be common to meet Linda at age 31, find out her current status, and make probability inferences about her college life. The likelihood that Linda had the college background ascribed to her would be much higher if she were currently a bank teller and active feminist than if she were a bank teller alone. In other words, P[college life | bank teller & feminist] > P[college life | bank teller], and P[bank teller & feminist | college life] < P[bank teller | college life]. It may be that we are good at solving the more common problem, whether through practice or innate psychological predisposition, and fail to account for the unusual nature of the problem presented in the experiment. We think we are solving the familiar problem, not the unfamiliar one. This explanation would be consistent with another Kahneman and Tversky experiment (Tversky and Kahneman 1983; Mlodinow 2008, 25) in which doctors are essentially asked to predict symptoms based on an underlying condition. Doctors are usually trained to do the reverse: diagnose underlying conditions based on symptoms.

Alternatively, the explanation may be in how the problem is posed. Possibly when we read C ("bank teller and feminist"), we unconsciously impose symmetry on the problem and reinterpret A as "bank teller and nonfeminist." Given the information we have about Linda, it would be reasonable to assign a higher probability to C than the reinterpreted A. Perhaps the experimental results would change if we chose a better formulation of the problem—for example, by stating A as "Linda is a bank teller, but you do not know if she is active in the feminist movement" because this restatement would make it very explicit that C is, in a sense, a subset of A.

The argument about heuristics (how we think about problems) and how a problem is posed is related to Gigerenzer (2002) and discussed in more detail later.

Such mistakes are not uncommon. Kahneman and Tversky developed the concepts of representativeness, availability of instances or scenarios, and adjustment from an anchor as three heuristics that people use to solve probability problems and deal with uncertainty.9 These heuristics often lead to mistakes or biases, as seen in the Linda example. The fields of behavioral economics and behavioral finance are in large part based on their work, and their work is not limited to the academic arena. Many books have popularized the idea that human intuition is not well suited to dealing with randomness. Taleb (2004, 2007) is well known, but Gigerenzer (2002) and Mlodinow (2008) are particularly informative.

9 See Tversky and Kahneman (1974).

Probability Is Not Intuitive

Thinking carefully about uncertainty and randomness is difficult but genuinely productive. The fact is that dealing with probability and randomness is hard and sometimes just plain weird. Mlodinow (2008), from which the description of the Linda experiment is taken, has further examples. But one particularly nice example of how probability problems are often nonintuitive is the classic birthday problem. It also exhibits the usefulness of probability theory in setting our intuition straight.



The birthday problem is discussed in many texts, with the stimulating book by Aczel (2004) being a particularly good presentation. The problem is simple to state: What is the probability that if you enter a room with 20 people, 2 of those 20 will share the same birthday (same day of the year, not the same year)? Most people would say the probability is small because there are, after all, 365 days to choose from. In fact, the probability is just over 41 percent, a number that I always find surprisingly high. And it only takes 56 people to raise the probability to more than 99 percent. As Aczel put it:

When fifty-six people are present in a room, there is a ninety-nine percent probability that at least two of them share a birthday! How can we get so close to certainty when there are only fifty-six people and a total of three hundred and sixty-five possible days of the year? Chance does seem to work in mysterious ways. If you have three hundred and sixty-five open boxes onto which fifty-six balls are randomly dropped, there is a ninety-nine percent chance that there will be at least two balls in at least one of the boxes. Why does this happen? No one really has an intuition for such things. The natural inclination is to think that because there are over three hundred empty boxes left over after fifty-six balls are dropped, no two balls can share the same spot. The mathematics tells us otherwise, and reality follows the mathematics. In nature, we find much more aggregation—due to pure randomness—than we might otherwise suspect. (pp. 71–72)10

10 Feller (1968, 33) also discusses the problem and gives approximations to the probability that two or more people in a group of size r have the same birthday. For a small r (say, around 10), P[2 or more with same birthday] ≈ r(r − 1)/730. For a larger r (say, 15 or more), P[2 or more with same birthday] ≈ 1 − exp[−r(r − 1)/730]. These work quite well. For r = 23 people, the true probability is 0.507 and the approximation is 0.500, and for r = 56, the true is 0.988 and the approximation is 0.985.
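Both the exact probability and the approximations in the footnote take only a few lines of code. A sketch in Python:

```python
import math

def p_shared_birthday(r):
    """Exact P[at least 2 of r people share a birthday], with 365
    equally likely days and leap years ignored."""
    p_all_distinct = 1.0
    for i in range(r):
        p_all_distinct *= (365 - i) / 365
    return 1.0 - p_all_distinct

def feller_approximation(r):
    """Feller's larger-r approximation: 1 - exp[-r(r - 1)/730]."""
    return 1.0 - math.exp(-r * (r - 1) / 730)

for r in (20, 23, 56):
    print(f"r = {r:2d}: exact = {p_shared_birthday(r):.3f}, "
          f"approximation = {feller_approximation(r):.3f}")

# r = 20 gives just over 0.41, r = 23 about 0.507, and r = 56 about
# 0.988, matching the figures quoted in the text and the footnote.
```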

Another example of how intuition can mislead and where probability is not intuitive is in assessing streaks, or runs. Random sequences will exhibit clustering, or bunching (e.g., runs of multiple heads in a sequence of coin flips), and such clustering often appears to our intuition to be nonrandom. The random shuffle on an iPod has actually been adjusted so it appears to us as more random. When the iPod was originally introduced, the random order of songs would periodically produce repetition, and users hearing the same song or artist played back to back believed the shuffling was not random. Apple altered the algorithm to be "less random to make it feel more random," according to Steve Jobs.11 The clustering of random sequences is also why subrandom or quasi-random sequences are used for Monte Carlo simulation and Monte Carlo numerical integration; these sequences fill the space to be integrated more uniformly.12

To appreciate how runs can mislead, consider observing 10 heads in a row when flipping a coin. Having 10 in a row is unlikely, with a probability of 1 in 1,024, or 0.098 percent. Yet, if we flip a coin 200 times, there is a 17 percent chance we will observe a run of either 10 heads or 10 tails.13
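As footnote 13 notes, the 17 percent figure comes from simulation. A Monte Carlo sketch along those lines in Python (the seed and trial count are arbitrary choices):

```python
import random

def longest_run(n_flips, rng):
    """Length of the longest run of identical outcomes (heads or
    tails) in n_flips tosses of a fair coin."""
    longest = current = 1
    previous = rng.random() < 0.5
    for _ in range(n_flips - 1):
        flip = rng.random() < 0.5
        current = current + 1 if flip == previous else 1
        longest = max(longest, current)
        previous = flip
    return longest

rng = random.Random(12345)
trials = 100_000
hits = sum(longest_run(200, rng) >= 10 for _ in range(trials))
print(f"P[run of 10+ in 200 flips] ~ {hits / trials:.3f}")  # roughly 0.17
```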

Runs or streaks occur in real life, and we need to be very careful in interpreting such streaks. As the example of 10 heads shows, unlikely events do occur in a long-repeated process. A very practical example, highly relevant to anyone interested in risk management, is that of Bill Miller, portfolio manager of Legg Mason Value Trust Fund. Through the end of 2005, Bill Miller had a streak of 15 years of beating the S&P 500,14 which is an extraordinary accomplishment, but is it caused by skill or simply luck? We will see that it could easily be entirely because of luck.

The likelihood of a single fund beating the S&P 500 for 15 years in a row is low. Say we choose one particular fund, and let us assume that the fund has only a 50/50 chance of beating the index in a given year (so that no exceptional skill is involved, only luck). The probability of that fund beating the index for the next 15 years is only 1 in 32,768, or 0.003 percent—very low.

But 0.003 percent is not really the relevant probability. We did not select the Value Trust Fund before the streak and follow just that one fund; we are looking back and picking the one fund out of many that had a streak. The streak may have been caused by exceptional skill, but it may also have been caused by our looking backward and considering the one lucky fund that did exceptionally well. Among many funds, one fund will always be particularly lucky, even if we could not say beforehand which fund that would be.

When we look at many funds, how exceptional would it be to observe a streak of 15 years? Say that only 1,000 funds exist (clearly an underestimate), that each fund operates independently, and that each fund has a 50/50 chance of beating the index in a particular year. What would be the chance that, over 15 years, we would see at least 1 of those 1,000 funds with a 15-year streak? It turns out to be much higher than 1 in 32,768—roughly 1 in 30, or 3 percent.15 Therefore, observing a 15-year streak among a pool of funds is not quite so exceptional.

11 See Mlodinow (2008, 175) and Maslin (2006).
12 For a discussion of subrandom sequences, see, for example, Press, Teukolsky, Vetterling, and Flannery (2007, section 7.8).
13 I use simulation to arrive at this answer; I do not know of any simple formula for calculating the probability of such a run.
14 The discussion of results through 2005 follows Mlodinow (2008).

But we are not done yet. Commentators reported in 2003 (earlier in the streak) that "no other fund has ever outperformed the market for a dozen consecutive years over the last 40 years."16 We really should consider the probability that some fund had a 15-year streak during, say, the last 40 years. What would be the chance of finding one fund out of a starting pool of 1,000 that had a 15-year streak sometime in a 40-year period? This scenario gives extra freedom because the streak could be at the beginning, middle, or end of the 40-year period. It turns out that the probability is now much higher, around 33 percent. In other words, the probability of observing such a streak, caused purely by chance, is high.17

15 If each fund has probability p of outperforming in a year (in our case, p = 0.5), then the probability that one fund has a streak of 15 years is p^15 = 0.000031 because performance across years is assumed to be independent and we multiply the probability of independent events to get the joint probability (one of the laws of probability—see Aczel 2004, ch. 4, or Hacking 2001, ch. 6). Thus, the probability that the fund does not have a streak is 1 − p^15 = 0.999969. Each fund is independent, so for 1,000 funds, the probability that no fund has a streak is (1 − p^15)^1,000 = 0.9699 (again, we multiply independent events), which means the probability that at least one fund has a streak is 1 − 0.9699 = 0.0301.
16 Mauboussin and Bartholdson (2003, quoted in Mlodinow 2008, 180).
17 I arrive at 33 percent by simulating the probability that a single fund would have a 15-year (or longer) run in 40 years (p = 0.000397) and then calculating the probability that none of 1,000 identical and independent funds would have a 15-year streak [(1 − p)^1,000 = 0.672]. Thus, the probability that at least one fund has a streak is 1 − 0.672 = 0.328. Mlodinow (2008, 181) arrives at a probability of roughly 75 percent. Mlodinow may have assumed a more realistic pool of funds—say, 3,500, which would give a probability of 75 percent for at least one streak.
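The arithmetic in footnotes 15 and 17 can be reproduced directly, with a small simulation for the within-40-years piece. A sketch in Python (seed and trial count are arbitrary; the simulation loop may take a few seconds):

```python
import random

# Footnote 15: at least 1 of 1,000 independent funds, each with a 50/50
# chance of beating the index each year, beats it 15 years running.
p_streak = 0.5 ** 15                      # 0.000031, about 1 in 32,768
print(f"1,000 funds, 15 straight years:   {1 - (1 - p_streak) ** 1000:.4f}")

# Footnote 17: estimate by simulation the chance that a single fund has
# a 15-year (or longer) winning streak somewhere within 40 years.
rng = random.Random(12345)
trials = 500_000
hits = 0
for _ in range(trials):
    run = 0
    for _ in range(40):
        run = run + 1 if rng.random() < 0.5 else 0
        if run >= 15:
            hits += 1
            break
p_40 = hits / trials                       # roughly 0.0004
print(f"one fund, streak within 40 years:  {p_40:.5f}")
print(f"at least 1 of 1,000 such funds:    {1 - (1 - p_40) ** 1000:.3f}")
```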

The point of this exercise is not to prove that Bill Miller has only average skill. Possibly he has extraordinary skill, possibly not. The point is that a 15-year streak, exceptional as it sounds, does not prove that he has extraordinary skill. We must critically evaluate the world and not be misled by runs, streaks, or other quirks of nature. A streak like Bill Miller's sounds extraordinary. But before we get carried away and ascribe extraordinary skill to Bill Miller, we need to critically evaluate how likely such a streak is due to pure chance. We have seen that it is rather likely. Bill Miller may have exceptional skill, but the 15-year streak does not, on its own, prove the point.18

PROBABILITY PARADOXES AND PUZZLES: A LONG DIGRESSION

There are many probability paradoxes and puzzles. In this long digression, I explore random walks and the "Monty Hall problem."19

RANDOM WALKS

One interesting and instructive case of a probability paradox is that of random walks—specifically, the number of changes of sign and the time spent in either positive or negative territory.

The simplest random walk is a process in which, each period, a counter moves up or down by one unit with a probability of one-half for each. (This example is sometimes colloquially referred to as the drunkard's walk, after a drunkard taking stumbling steps from a lamppost—sometimes going forward and sometimes back, but each step completely at random.) A random walk is clearly related to the binomial process and Bernoulli trials because each period is up or down—in other words, an independent Bernoulli trial with probability p = 1/2.

Random walks provide an excellent starting point for describing many real-life situations, from gambling to the stock market. If we repeatedly toss a fair coin and count the number of heads minus the number of tails, this sequence is a simple random walk. The count (number of heads minus number of tails) could represent a simple game of chance: If we won $1 for every heads and lost $1 for every tails, the count would be our total winnings. With some elaborations (such as a p of not quite one-half and very short times), a random walk can provide a rudimentary description of stock market movements.

18 As a side note, the performance for the Legg Mason Value Trust since 2005 has been not merely average but abysmal. For the four years from 2006 to 2009, the Value Trust underperformed the S&P 500 three years out of four, and overall from year-end 2005 through year-end 2009, it was down 37.5 percent while the S&P 500 was down roughly 2.7 percent.
19 Note that this section is a digression that can be read independently of the rest of the chapter.

Let us consider more carefully a simple random walk representing a game of chance in which we win $1 for every heads and lose $1 for every tails. This is a fair game. My intuition about the law of averages would lead me to think that because heads and tails each have equal chance, we should be up about half the time and we should go from being ahead to being behind fairly often. This assumption may be true in the long run, but the long run is very deceptive. In fact, "intuition leads to an erroneous picture of the probable effects of chance fluctuations."20

Let us say we played 10,000 times. Figure 2.4 shows a particularly well-known example from Feller (1968). In this example, we are ahead (positive winnings) for roughly the first 120 tosses, and we are substantially ahead for a very long period, from about toss 3,000 to about 6,000. There are only 78 changes of sign (going from win to lose or vice versa), which seems to be a small number but is actually more than we should usually expect to see. If we repeated this game (playing 10,000 tosses) many times, then roughly 88 percent of the time we would see fewer than 78 changes of sign in the cumulative winnings. This is extraordinary to me.

FIGURE 2.4 Sample of 10,000 Tosses of an Ideal Coin (Panel A: First 550 Trials; Panel B: Trials 1–6,000, Compressed; Panel C: Trials 6,000–10,000, Compressed)
Note: The compressed scale is 10 times smaller.
Source: Based on Feller (1968, fig. 4).

20 Feller (1968, 78). This discussion is taken from the classic text on probability, Feller (1968, sections III.4–III.6).

Even more extraordinary would be if we ran this particular example of the game in reverse, starting at the end and playing backward. The reverse is also a random walk, but for this particular example, we would see only eight changes of sign and would be on the negative side for 9,930 out of 10,000 steps—on the winning side for only 70 steps. And yet, this outcome is actually fairly likely. The probability is better than 10 percent that in 10,000 tosses of a fair coin, we are almost always on one side or the other—either winning or losing for more than 9,930 out of the 10,000 trials. This result sounds extraordinary, but it is simply another example of how our intuition can mislead. As Feller says, if these results seem startling, "this is due to our faulty intuition and to our having been exposed to too many vague references to a mysterious 'law of averages'" (p. 88).

As a practical matter, we must be careful to examine real-world examples and compare them with probability theory. In a game of chance or other events subject to randomness (such as stock markets), a long winning period might lead us to believe we have skill or that the probability of winning is better than even. Comparison with probability theory forces us to critically evaluate such assumptions.
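Feller's numbers are straightforward to check by simulation. A sketch in Python (using NumPy for speed; the game count and seed are arbitrary) that repeats the 10,000-toss game and counts sign changes of the cumulative winnings:

```python
import numpy as np

rng = np.random.default_rng(12345)
games, tosses = 2_000, 10_000
fewer_than_78 = 0

for _ in range(games):
    steps = rng.choice([-1, 1], size=tosses)   # +$1 heads, -$1 tails
    winnings = np.cumsum(steps)
    signs = np.sign(winnings)
    signs = signs[signs != 0]                  # drop ties at exactly zero
    # A sign change is a crossing from winning to losing or vice versa.
    changes = int(np.sum(signs[1:] != signs[:-1]))
    if changes < 78:
        fewer_than_78 += 1

# Per the text, roughly 88 percent of games show fewer than 78 changes.
print(f"P[fewer than 78 sign changes] ~ {fewer_than_78 / games:.2f}")
```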

THE MONTY HALL PROBLEM

One of the best-known probability puzzles goes under the name of the Monty Hall problem, after the host of the old TV game show Let's Make a Deal. One segment of the original show involved Monty Hall presenting a contestant with three doors. Behind one door was a valuable prize (often a car), and behind the other two were less valuable or worthless prizes (invariably referred to in current presentations as goats). The contestant chose one door, but before the chosen door was opened, Monty Hall would step in and open one of the doors and then give the contestant the opportunity to either stay with his original choice or switch. The probability puzzle is this: Is it better to stay with your original door or switch?


The answer we will eventually come to is that it is better to switch: The chance of winning is one-third if you stay with the original door and two-thirds if you switch.

Before delving into the problem more deeply, however, two particulars are needed. First, the problem as I have written it is actually not well posed and really cannot be answered properly. The heart of the problem, as we will see, is exactly what rules Monty Hall uses to open the doors: Does he always open a door, no matter which door the contestant chooses? Does he always open a door with a goat? The outline of the problem just given is too sloppy in laying out the rules.

Second, this problem has created more controversy and more interest both inside and outside the mathematical community than any comparable brainteaser. The history of the problem is itself interesting, but the controversy also serves to highlight some important truths:

- Thinking carefully about probability is hard but does have value. By doing so, we can get the right answer when intuition may mislead us.
- Assumptions and the framework of the problem are vitally important. We shall see that the answer for the Monty Hall problem depends crucially on the details of how the game show is set up.
- When we get an answer that does not make sense, we usually need to go back and refine our thinking about and assumptions behind the problem. We often find that we did not fully understand how to apply the solution or the implications of some assumption. Ultimately, we end up with deeper insight into the problem and a better understanding of how to apply the solution in the real world. (This is somewhat along the lines of Lakatos's [1976] Proofs and Refutations.)
- Related to the preceding point, probability problems and models are just representations of the world, and it is important to understand how well (or how poorly) they reflect the part of the world we are trying to understand. The Monty Hall problem demonstrates this point well. In the actual TV show, Monty Hall did not always act as specified in this idealized problem. Our solution does, however, point us toward what is important—in this case, understanding Monty Hall's rules for opening the doors.


The Monty Hall problem has been around for a considerable time, and its more recent popularity has generated a considerable literature. A recent book by Jason Rosenhouse (2009), on which many points in this exposition are based, is devoted entirely to Monty Hall.21 The first statement of the problem, under a different name but mathematically equivalent, was apparently made by Martin Gardner (1959) in a Scientific American column. That version of the problem, although it generated interest in the mathematical community, did not become famous.

The first appearance of the problem under the rubric of Monty Hall and Let's Make a Deal appears to have been in 1975, in two letters published in the American Statistician by Steve Selvin (1975a, 1975b). Once again, this presentation of the problem generated interest but only within a limited community.

The Monty Hall problem took off with the answer to a question in Parade magazine in September 1990 from reader Craig Whitaker to the columnist Marilyn vos Savant, author of the magazine's "Ask Marilyn" column. Vos Savant was famous for being listed in the Guinness Book of World Records (and inducted into the Guinness Hall of Fame) as the person with the world's highest recorded IQ (228) but is now better known for her (correct) response to the Monty Hall problem.

The question that started the furor was as follows:

Suppose you are on a game show, and you are given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say, Number 1, and the host, who knows what is behind the doors, opens another door, say, Number 3, which has a goat. He says to you, "Do you want to pick door Number 2?" Is it to your advantage to switch your choice of doors? (vos Savant 1990a, 15)

The reply was:

Yes, you should switch. The first door has a one-third chance of winning, but the second door has a two-thirds chance. Here's a good way to visualize what happened. Suppose there are a million doors, and you pick door Number 1. Then the host, who knows what is behind the doors and will always avoid the one with the prize, opens them all except door number 777,777. You would switch to that door pretty fast, wouldn't you? (vos Savant 1990b, 25)

21 The Monty Hall problem is discussed widely—Mlodinow (2008), Gigerenzer (2002), and Aczel (2004), although under a different formulation. Vos Savant (1996) covers the topic in some depth.

This simple exchange led to a flood of responses—thousands of letters from the general public and the halls of academe. Vos Savant was obliged to follow up with at least two further columns. The responses, many from professional mathematicians and statisticians, were often as rude as they were incorrect (from vos Savant 1996, quoted in Rosenhouse 2009, 24–25):

Since you seem to enjoy coming straight to the point, I will do the same. In the following question and answer, you blew it!

You blew it, and you blew it big!

May I suggest that you obtain and refer to a standard textbook on probability before you try to answer a question of this type again?

You made a mistake, but look at the positive side. If all those PhDs were wrong, the country would be in some very serious trouble.

Unfortunately for these correspondents, vos Savant was absolutely correct, although possibly less careful than an academic mathematician might have been in stating the assumptions of the problem. All those PhDs were wrong.

Let me state the problem in a reasonably precise way:

- There are three doors, with a car randomly placed behind one door and goats behind the other two.
- Monty Hall, the game show host, knows the placement of the car and the goats; the contestant does not.
- The contestant chooses one door, but that door is not opened.
- Monty Hall then opens a door. He follows these rules in doing so:
  - Never open the door the contestant has chosen.
  - If the car is behind the contestant's door (so that the two nonchosen doors have goats), randomly choose which goat door to open.
  - If the car is behind one of the two nonchosen doors (so only one nonchosen door has a goat), open that goat door.
- As a result of these rules, Monty Hall will always open a nonchosen door and that door will always show a goat.
- Most importantly, the rules ensure that a goat door is opened deliberately and systematically, in a decidedly nonrandom way so that a goat door is always opened and a car door is never opened.
- The contestant is now given the choice of staying with her original door or switching to the remaining closed door.

The natural inclination is to assume that there are now two choices (the door originally chosen and the remaining unopened door), and with two choices, there is no benefit to switching; it is 50/50 either way. This natural inclination, however, is mistaken. The chance of winning the car by remaining with the original door is one-third; the chance of winning by switching is two-thirds.

As pointed out earlier, there is a vast literature discussing this problem and its solution. I will outline two explanations for why the one-third versus two-thirds answer is correct, but take my word that, given the rules just outlined, it is correct.22

The first way to see that switching provides a two-thirds chance of winning is to note that the originally chosen door started with a one-third chance of having the car, and the other two doors, together, had a two-thirds chance of winning. (Remember that the car was randomly assigned to a door, so any door a contestant might choose has a one-third chance of being the door with the car.) The way that Monty Hall chooses to open a door ensures that he always opens one of the other two doors and always chooses a door with a goat. The manner of his choosing does not alter the one-third probability that the contestant chose the car door originally, nor does it alter the two-thirds probability that the car is behind one of the other two. By switching, the contestant can move from one-third to two-thirds probability of winning. (Essentially, in the two-thirds of the cases in which the car is behind one of the other two doors, Monty Hall reveals which door it is not behind. Monty Hall's door opening provides valuable information.)

22 These arguments are intended to show why the solution is correct, not as a formal proof of the solution. See Rosenhouse (2009) for a proof of the classical problem, together with a large choice of variations.

An alternative approach, and the only one that seems to have convinced some very astute mathematicians, is to simulate playing the game.23 Take the role of the contestant, always pick Door 1, and try the strategy of sticking with Door 1. (Because the car is randomly assigned to a door, always picking Door 1 ends up the same as randomly picking a door.) Use a random number generator to generate a uniform random variable between 0 and 1 (for example, the RAND() function in Microsoft Excel). If the random number is less than one-third, or 0.3333, then the car is behind Door 1 and you win. Which other door is opened does not matter. Try a few repeats, and you will see that you win roughly one-third of the time.

Now change strategies and switch doors. If the random number is less than one-third, or 0.3333, then the car is behind Door 1 and you lose by switching doors. Which other door is opened really does not matter because both doors have goats and by switching, you lose. If the random number is between 0.3333 and 0.66667, then the car is behind Door 2; Door 3 must be opened, and you switch to Door 2 and win. If the random number is between 0.66667 and 1.0, then the car is behind Door 3; Door 2 must be opened, and you switch to Door 3 and win. Try several repeats. You will soon see that you win two-thirds of the time and lose one-third.
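The procedure just described translates directly into a few lines of Python, with the random module standing in for Excel's RAND() (the seed and trial count are arbitrary choices):

```python
import random

rng = random.Random(12345)
trials = 100_000
stay_wins = switch_wins = 0

for _ in range(trials):
    u = rng.random()        # uniform draw between 0 and 1, like RAND()
    # The contestant always picks Door 1. The car is behind Door 1 when
    # u < 1/3; otherwise it is behind Door 2 or Door 3.
    if u < 1 / 3:
        stay_wins += 1      # staying wins; switching would lose
    else:
        # Monty Hall must open the remaining goat door, so switching
        # always lands on the car in these cases.
        switch_wins += 1

print(f"stay:   win fraction {stay_wins / trials:.3f}")    # about 1/3
print(f"switch: win fraction {switch_wins / trials:.3f}")  # about 2/3
```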

In the end, the strategy of switching wins two-thirds of the time and the strategy of staying wins only one-third. Although nonintuitive, this strategy is correct. In the literature, there are many discussions of the solution, many that go into detail and present solutions from a variety of perspectives.24

In this problem, the rules for choosing the doors are the critical component. Consider an alternate rule. Say that Monty Hall does not know the car location and randomly chooses an unopened door, meaning that he sometimes opens a door with the car and the game ends. In this case, the solution is that if a door with a goat is opened, staying and switching each have a 50/50 chance of winning and there is no benefit to switching.

In the original game, Monty Hall's opening a goat door tells you nothing about your original door; the rules are designed so that Monty Hall always opens a goat door, no matter what your original choice. Heuristically, the probability of the originally chosen door being a winner does not change; it remains at one-third. (This can be formalized using Bayes' rule.)

In the alternate game, opening a door does tell you something about your original choice. When Monty Hall opens a door with a car (roughly one-third of the time), you know for sure that your door is a loser. When Monty Hall opens a goat door (two-thirds of the time), you know that now only two choices are left, with your originally chosen door one of those possibilities.

The actual TV show apparently did not abide by either of these sets of rules but, rather, by a set of rules we might call somewhat malevolent.25 If the contestant chose a goat, Monty Hall would usually open the contestant's door to reveal the goat and end the game. When the contestant chose the car, Monty Hall would open one of the other doors to reveal a goat and then try to persuade the contestant to switch. Under these rules, Monty Hall's opening one of the other doors would be a sure sign that the originally chosen door was a winner. In this case, the best strategy would be to stick with the original door whenever Monty Hall opened another door.

For the actual TV game, the standard problem does not apply and the probability arguments are not relevant. Nonetheless, the analysis of the problem would have been truly valuable to any contestant. The analysis highlights the importance of the rules Monty Hall uses for choosing which door to open. For the actual game, contestants familiar with the probability problem could examine past games, determine the scheme used by Monty Hall to open doors, and substantially improve their chance of winning.

23 Hoffman (1998) relates how Paul Erdős, one of the most prolific twentieth-century mathematicians, was only convinced of the solution through a Monte Carlo simulation. This is also the method by which I came to understand that switching is the correct strategy.
24 Rosenhouse (2009) discusses the problem and solutions in detail. It is also covered in Mlodinow (2008) and Gigerenzer (2002).
25 See Rosenhouse (2009, 20).
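For readers who want the Bayes'-rule formalization mentioned above spelled out, here is a sketch, assuming (for definiteness) that the contestant picks Door 1 and Monty Hall opens Door 3. Under the standard rules, P[open 3 | car 1] = 1/2 (he picks randomly between the two goat doors), P[open 3 | car 2] = 1, and P[open 3 | car 3] = 0, so Bayes' rule gives

P[car 1 | open 3] = (1/2)(1/3) / [(1/2)(1/3) + (1)(1/3) + (0)(1/3)] = 1/3,
P[car 2 | open 3] = (1)(1/3) / (1/2) = 2/3.

Under the alternate (ignorant-host) rule, P[open 3 and goat shown | car 1] = P[open 3 and goat shown | car 2] = 1/2, and the same computation gives 1/2 for each unopened door, matching the 50/50 answer above.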

Past/Future Asymmetry

One aspect of uncertainty and randomness that is particularly important is what might be called past/future asymmetry. It is often easy to explain the past but very difficult to predict the future, and events that look preordained when viewed in hindsight were often uncertain at the time. Mlodinow (2008) discusses this topic at some length. One nice example he gives in Chapter 10 is chess:

Unlike card games, chess involves no explicit random element. And yet there is uncertainty because neither player knows for sure what his or her opponent will do next. If the players are expert, at most points in the game it may be possible to see a few moves into the future; if you look out any further, the uncertainty will compound, and no one will be able to say with any confidence exactly how the game will turn out. On the other hand, looking back, it is usually easy to say why each player made the moves he or she made. This again is a probabilistic process whose future is difficult to predict but whose past is easy to understand. (pp. 197–198)

In Chapter 1 of his book, Mlodinow gives examples of manuscripts rejected by publishers: John Grisham's manuscript for A Time to Kill by 26 publishers, J. K. Rowling's first Harry Potter manuscript by 9, and Dr. Seuss's first children's book by 27. Looking back, it is hard to believe that such hugely popular books could ever have been rejected by even one publisher, but it is always easier to look back and explain what happened than it is to look forward and predict what will happen.

Because we always look back at history, and because the past is so often easy to explain, we can fall into the trap of thinking that the future should be equally easy to explain and understand. It is not, and the chess example is a good reminder of how uncertain the future can be, even for a game with well-defined rules and limited possible moves. We must continually remember that the future is uncertain; all our measurements give us only an imperfect view of what might happen and will never eliminate the inherent uncertainty of the future.

Do Not Worry Too Much about Human Intuition

It is true that thinking about uncertainty is difficult and human intuition is often poor at solving probability problems. Even so, we should not go too far worrying about intuition. So what if human intuition is ill suited to situations involving uncertainty? Human intuition is ill suited to situations involving quantum mechanics, or special relativity, or even plain old classical mechanics. That does not stop us from developing DVD players and MRI scanners (which depend on quantum mechanics) and GPS devices (requiring both special and general relativistic timing corrections) or from calculating projectile trajectories (using classical mechanics). None of these are intuitive; they require science and mathematics to arrive at correct answers, and nobody is particularly surprised that quantitative analysis is required to inform, guide, and correct intuition.

If we were to conduct experiments asking people about relativistic physics, nobody would get the right answers. The paradoxes in relativity are legion and, in fact, are widely taught in undergraduate courses in special relativity. And quantum mechanics is worse: Einstein never could accept quantum entanglement and what he called "spooky action at a distance," but it is reality, nonetheless. Lack of intuition does not stop the development of relativistic physics or quantum mechanics or their practical application.

In the realm of probability, why should anybody be surprised that quantitative analysis is necessary for understanding and dealing with uncertainty? We should be asking how good the quantitative tools are and how useful the quantitative analysis is, not fretting that intuition fails. "The key to understanding randomness and all of mathematics is not being able to intuit the answer to every problem immediately but merely having the tools to figure out the answer" (Mlodinow 2008, 108).

This discussion is not meant to belittle intuition. Intuition can be valuable, and not all problems can be solved mathematically. The best seller Blink, by Gladwell (2005), extols the virtues of intuition26 and is itself based in part on research performed by Gigerenzer (2007). My point is that the failure of intuition in certain circumstances does not invalidate the usefulness or importance of formal probabilistic analysis.

Steps toward Probabilistic Numeracy

I am not saying that understanding and working with probability is easy. Nor am I saying that risk management is a science comparable to physics; in many ways, it is harder because it deals with the vagaries of human behavior. But neither should we, as some commentators seem to advocate, just walk away and ignore the analytical and mathematical tools that can help us understand randomness and manage risk. Risk management and risk measurement are hard, and there are and will continue to be mistakes and missteps and problems that cannot be solved exactly, or even approximately. But without the mathematics to systematize and organize the problems, the task would be plainly impossible.

26 Gladwell's book spawned a counterargument (Adler 2009) in which the author makes the case that first impressions are usually wrong and that one ought to do the hard work of analyzing a situation before making a decision.


Gigerenzer (2002), who takes a critical approach to the work of Kahneman and Tversky, has a refreshing approach to the problem of living with uncertainty. (Indeed, Gigerenzer [2002] was published outside the United States under the title Reckoning with Risk: Learning to Live with Uncertainty.) Gigerenzer argues that sound statistical (and probabilistic) thinking can be enhanced, both through training and through appropriate tools and techniques:

Many have argued that sound statistical thinking is not easily turned into a "habit of mind." . . . I disagree with this habit-of-mind story. The central lesson of this book is that people's difficulties in thinking about numbers need not be accepted, because they can be overcome. The difficulties are not simply the mind's fault. Often, the solution can be found in the mind's environment, that is, in the way numerical information is presented. With the aid of intuitively understandable representations, statistical thinking can become a habit of mind. (p. 245)

Gigerenzer (2002, 38) aims to overcome statistical innumeracy through three steps:

1. Defeat the illusion of certainty (the human tendency to believe in the certainty of outcomes or the absence of uncertainty)

2. Learn about the actual risks of relevant events and actions
3. Communicate risks in an understandable way

These three steps apply equally to risk management. Most work in risk management focuses on the second step, learning about risks, but the first and third are equally important. Thinking about uncertainty is hard, but it is important to recognize that things happen and the future is uncertain. And communicating risk is especially important. The risks a firm faces are often complex and yet need to be shared with a wide audience in an efficient, concise manner. Effectively communicating these risks is a difficult task that deserves far more attention than it is usually given.

2.4 PROBABILITY AND STATISTICS

Probability is the science of studying uncertainty and systematizing randomness. Given uncertainty of some form, what should happen, what should we see? A good example is the analysis of streaks, the chance of a team winning a series of games. This kind of problem is discussed in any basic probability text, and Mlodinow (2008) discusses this type of problem.


Consider two teams that play a series of three games, with the first team to win two games being the winner of the series. There are four ways a team can win the series and four ways to lose the series, as laid out in the following table. If the teams are perfectly matched, each has a 50 percent chance of winning a single game, each individual possibility has a probability of one-eighth (0.125 = 0.5 × 0.5 × 0.5), and each team has a 50 percent chance of winning the series:

Win    Probability    Lose    Probability
WWL    0.125          LLW     0.125
WLW    0.125          LWL     0.125
LWW    0.125          WLL     0.125
WWW    0.125          LLL     0.125
       0.500                  0.500

The analysis seems fairly obvious.27 But consider if the teams are not evenly matched and one team has a 40 percent chance of winning and a 60 percent chance of losing. What is the probability the inferior team still wins the series? We can write down all the possibilities as before, but now the probabilities for outcomes will be different; for example, a WWL for the inferior team will have probability 0.096 (0.4 × 0.4 × 0.6):

Win    Probability    Lose    Probability
WWL    0.096          LLW     0.144
WLW    0.096          LWL     0.144
LWW    0.096          WLL     0.144
WWW    0.064          LLL     0.216
       0.352                  0.648

It turns out the probability of the inferior team winning the series is 35 percent, not a lot less than the chance of winning an individual game.

The problem becomes more interesting when considering longer series. The winner of the World Series in baseball is the winner of four out of seven games. In baseball, the best team in a league wins roughly 60 percent of its games during a season and the worst team wins roughly 40 percent, so pitting a 60 percent team against a 40 percent team would be roughly equivalent to pitting the top team against the bottom team. What would be the chance that the inferior team would still win the series? We need only write down all the possible ways as we just did (but now there are 128 possible outcomes rather than 8), calculate the probability of each, and sum them up. The result is 29 percent.

27 It might seem odd to include the possibilities WWL and WWW separately because in both cases the final game would not be played. They need to be included, however, because the series sometimes goes to three games (as in WLW). And because the series sometimes goes to three games, we must keep track of all the possible ways it could go to three games and count WWL and WWW as separate possibilities.
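To make the enumeration concrete, here is a minimal Python sketch (the function name series_win_prob is ours, purely for illustration) that lists every win/lose sequence of an n-game series, as footnote 27 suggests, and sums the probabilities of the sequences in which the weaker team takes the majority of games. It reproduces both the 35 percent figure for the three-game series and the 29 percent figure for the seven-game series.

from itertools import product

def series_win_prob(p_win, n_games):
    """Probability that a team winning each game with probability
    p_win wins the majority of n_games, by full enumeration."""
    need = n_games // 2 + 1              # games needed to take the series
    total = 0.0
    for outcome in product("WL", repeat=n_games):
        if outcome.count("W") >= need:
            prob = 1.0
            for game in outcome:
                prob *= p_win if game == "W" else (1 - p_win)
            total += prob
    return total

print(series_win_prob(0.4, 3))   # ~0.352: best-of-three series
print(series_win_prob(0.4, 7))   # ~0.290: best-of-seven (World Series)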

To me, a 29 percent chance of such an inferior team winning the series is surprisingly high. It is also a good example of how probability theory can help guide our intuition. I would have thought, before solving the problem, that the probability would be lower, much lower. The analysis, however, forces me to realize that either my intuition is wrong or my assumptions are wrong.28 Probability theory and analysis help us to critically evaluate our intuition and assumptions and to adjust both so that they more closely align with experience and reality.

The analysis of win/lose situations turns out to be quite valuable and applicable to many problems. It is the same as coin tossing: heads versus tails (although not necessarily with a balanced 50/50 coin). It applies to the streak of the Legg Mason Value Trust Fund. The name given to such a process with two outcomes, one outcome usually (for convenience) labeled success and the other failure, is a Bernoulli trial. When a Bernoulli trial is repeated a number of times, the number of successes that occurs is said to have a binomial distribution.

BERNOULLI

Bernoulli trials are named after Jakob Bernoulli (1654–1705, also known as Jacob, James, and Jacques). The Bernoulli family was so prolific that it is difficult to keep all the Bernoullis straight. Over the time from 1650 to 1800, the family produced eight noted mathematicians, with three (Jakob, brother Johann, and nephew Daniel) among the world's greatest mathematicians.

The weak law of large numbers originated with Jakob and also goes by the name of Bernoulli's Theorem. It was published as the "Golden Theorem" in Ars Conjectandi in 1713, after Jakob's death. The probabilistic Bernoulli's Theorem should not be confused with the fluid dynamics Bernoulli's Theorem, or principle, which originated with nephew Daniel (1700–1782).

28 It may be that the worst team in the league has a probability lower than 40 percent of winning a single game. Nonetheless, the World Series pits the best teams from the American and National Leagues, and these teams will be more closely matched than 60 percent/40 percent. Yet, the analysis shows that there is a reasonable chance (better than 30 percent) that the better team will lose the World Series.


Bernoulli trials and the binomial distribution have immediate application to finance and risk management. We often know (or are told) that there is only a 1 percent chance of losses worse than some amount Y (say, $100,000) in one day. This is the essence of VaR, as I show in Chapter 5. We can now treat losses for a given day as a Bernoulli trial: 99 percent chance of success, and 1 percent chance of failure (losses worse than $100,000). Over 100 days, this is a sequence of 100 Bernoulli trials, and the number of successes or failures will have a binomial distribution.

We can use probability theory to assess the chance of seeing one or more days of large losses. Doing so provides an example of how we must move toward embracing randomness and away from thinking there is any certainty in our world. The number of days worse than $100,000 will have a binomial distribution. Generally, we will not see exactly 1 day out of 100 with large losses, even though with a probability of 1 out of 100 we expect to see 1 day out of 100. Over 100 days, there is only a 37 percent chance of seeing a single day with large losses. There is a 37 percent chance of seeing no losses worse than $100,000, a 19 percent chance of two days, and even an 8 percent chance of three or more days of large losses.29
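These numbers follow directly from the binomial distribution (the formula is in footnote 29). As a quick check, a few lines of Python reproduce them; the helper name is ours, not the book's.

from math import comb

def prob_loss_days(k, n=100, q=0.01):
    """Binomial probability of exactly k large-loss days in n days,
    each day breaching the loss level independently with probability q."""
    return comb(n, k) * q**k * (1 - q)**(n - k)

print(prob_loss_days(0))   # ~0.366: no large-loss days
print(prob_loss_days(1))   # ~0.370: exactly one
print(prob_loss_days(2))   # ~0.185: exactly two
print(1 - sum(prob_loss_days(k) for k in range(3)))   # ~0.079: three or more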

The intent of this section is not to cover probability theory in depth but, rather, to explain what it is and show how it can be used. Books such as Mlodinow (2008), Gigerenzer (2002), Hacking (2001), Kaplan and Kaplan (2006), and, in particular, Aczel (2004) are very useful. Probability systematizes how we think about uncertainty and randomness. It tells us what we should expect to observe given a certain model or form of randomness in the world (for example, how likely a team is to win a series or how likely it is to see multiple bad trading days in a set of 100 days). Building probabilistic intuition is valuable; I would even say necessary, for any success in managing risk.


29 According to the binomial distribution with $p$ = probability of success and $q = 1 - p$ = probability of failure, the probability of $k$ failures out of $n$ trials is

$$\binom{n}{k} q^k (1-q)^{n-k}, \quad \text{where} \quad \binom{n}{k} = \frac{n!}{k!\,(n-k)!}$$

is the binomial coefficient. For $q = 0.01$ and $n = 100$: $P(k = 0) = 0.366$, $P(k = 1) = 0.370$, $P(k = 2) = 0.185$, $P(k \geq 3) = 0.079$.



Statistics

Probability theory starts with a model of randomness and from there develops statements about what we are likely to observe. Statistics, roughly speaking, works in the opposite direction. We use what we observe in nature to develop statements about the underlying probability model. For example, probability theory might start with knowing that there is a 1 percent chance of a day with losses worse than $100,000 and then tell us the chance that, in a string of 100 days, we will observe exactly one or exactly two or exactly three such days. Statistics starts with the actual losses that we observe over a string of 100 days and attempts to estimate the underlying process: Is the probability of a loss worse than $100,000 equal to 1 percent or 2 percent? Statistics also provides us with estimates of confidence about the probabilities so that we can know, for example, whether we should strongly believe that it is a 1 percent probability or (alternatively) whether we should only feel confident that it is somewhere between 0.5 percent and 1.5 percent.

For the technical side of risk measurement, statistics is equally or more important than probability. For the application of risk management, for actually managing risk, however, probability is more important. A firm understanding of how randomness may affect future outcomes is critical, even if the estimation of the underlying model has to be left to others. Without an appreciation of how randomness governs our world, understanding risk is impossible.

Theories of Probability: Frequency versus Belief (Objective versus Subjective)

There are deep philosophical questions concerning the foundations of probability, with two theories that are somewhat at odds. These theories often go under the name of objective probability versus subjective probability or by the words risk versus uncertainty, although better names (used by Hacking 2001) are frequency-type versus belief-type probability. Fortunately, we can safely sidestep much of the debate over the alternate approaches and, for most practical purposes, use the two interchangeably. Nonetheless, the distinction is relevant, and I will discuss the issues here before turning back to more strictly risk management issues.


The objective, or frequency-type, theory of probability is the easiest to understand and is tied to the origins of probability theory in the seventeenth century. Probability theory started with games of chance and gambling, and the idea of frequency-type probability is best demonstrated in this context. Consider an ideal coin, with a 50 percent chance of heads versus tails. Each flip of the coin is a Bernoulli trial, and we know that the probability of a heads is 50 percent. How do we know? It is an objective fact, one that we can measure by inspecting the coin or even better by counting the frequency of heads versus tails over a large number of trials. (The words objective and frequency are applied to this probability approach exactly because this probability approach measures objective facts and can be observed by the frequency of repeated trials.)

Repeated throws of a coin form the archetypal frequency-type probability system. Each throw of the coin is the same as any other, each is independent of all the others, and the throw can be repeated as often and as long as we wish.30 Frequency-type probability reflects how the world is (to use Hacking's phrase). It makes statements that are either true or false: A fair coin either has a one-half probability of landing heads on each throw or it does not; it is a statement about how the world actually is.

For frequency-type probability, laws of large numbers and central limit theorems are fundamental tools. Laws of large numbers tell us that as we repeat trials (flips of the coin), the relative frequency of heads will settle down to the objective probability set by the probabilistic system we are using, one-half for a fair coin. Not only that, but laws of large numbers and central limit theorems tell us how fast and with what range of uncertainty the frequency settles down to its correct value. These tools are incredibly powerful. For example, we can use the usual central limit theorem to say that in a coin-tossing experiment with 100 flips, we have a high probability that we will observe between 40 and 60 heads (and a low probability that we will observe outside that band).31

Frequency-type probability is ideally suited to games of chance, in which the game is repeated always under the same rules. Much of the world of finance fits reasonably well into such a paradigm. Trading in IBM stock is likely to look tomorrow like it does today, not in regard to the stock going up by the exact amount it did yesterday but, rather, in the likelihood that it will go up or down and by how much. New information might come out about IBM, but news about IBM often comes out, which is part of the repeated world of trading stocks. Whether IBM goes up or down is, in effect, as random as the flip of a coin (although possibly a biased coin because stocks generally grow over time). For many practical purposes, the coin that is flipped today can be considered the same as the coin flipped yesterday: We do not know whether IBM will go up or down tomorrow, but we usually do not have any particular reason to think it more likely to go up tomorrow than it has, on average, in the past.

30 A die would be another simple and common example of a system to which frequency-type probability would naturally apply. An ideal die would have a one-sixth chance of landing with any particular face up. For an actual die, we could examine the die itself and verify its symmetry, and we could also perform repeated throws to actually measure the frequency for each of the six faces.

31 The number of heads will be approximately normally distributed, $N(\mu = 50, \sigma^2 = 25)$, so that there will be a 95 percent probability the actual number of heads will be within $\mu \pm 2\sigma$, or $50 \pm 10$.
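As a quick check on footnote 31, the following Python lines compute the exact binomial probability that 100 flips of a fair coin land between 40 and 60 heads; the exact answer (about 0.96) sits close to the normal approximation's 95 percent.

from math import comb

p_band = sum(comb(100, k) * 0.5**100 for k in range(40, 61))
print(p_band)   # ~0.965: exact P(40 <= heads <= 60) for 100 fair flips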



For many problems, however, a frequency-type approach to probability just does not work. Consider the weather tomorrow. What does it mean to say the probability of precipitation tomorrow is 30 percent? This is not a true or false statement about how the world is. Viewed from today, tomorrow is a one-time event. Saying the probability is 30 percent is a statement about our confidence in the outcome or about the credibility of the evidence we use to predict that it will rain tomorrow. We cannot consider frequencies because we cannot repeat tomorrow. What about the probability that an asteroid impact led to the extinction of the dinosaurs? Or the probability that temperatures will rise over the next century (climate change)? None of these are repeatable events to which we can apply frequency concepts or the law of large numbers. Yet we need to apply, commonly do apply, and indeed can sensibly apply probabilistic thinking to these areas.

For these kinds of one-off or unique or nonfrequency situations, we rely on belief-type probabilities, what are often termed subjective probabilities.32 Belief-type probabilities must follow the same rules as frequency-type probabilities but arise from a very different source.

The probability of one-off events, or more precisely, our assessment or beliefs about the probabilities, can be uncovered using a neat trick developed by Bruno de Finetti (1906–1985), an Italian mathematician and co-developer of mean-variance optimization.33 The de Finetti game is a thought experiment, a hypothetical lottery or gamble in which an event is compared with drawing balls from a bag.

32 The word subjective is unfortunate. It suggests that this type of probability is somehow inferior to the frequency-type or objective probability. Furthermore, belief-type probability statements can be based on logical relations and evidence that can reasonably be labeled objective; an example is a forecast of rain tomorrow based on the observations that a storm system lies to the west and that weather in the middle northern latitudes usually moves from west to east. Like Hacking (2001), I will generally not use the words objective and subjective probability but rather frequency-type and belief-type probability.

33 See Markowitz (2006). See also Bernstein (2007, 108).



Say the event we are considering is receiving a perfect score on an exam; a friend took an exam and claims she is absolutely, 100 percent sure she got a perfect score on the exam (and she will receive the score tomorrow).34 We might be suspicious because, as Ben Franklin so famously said, "Nothing can be said to be certain, except death and taxes," and exam grades in particular are notoriously hard to predict.

We could ask our friend to choose between two no-lose gambles: The first is to receive $10 tomorrow if our friend's test is a perfect score, and the second is to receive $10 if our friend picks a red ball from a bag filled with 100 balls. The bag is filled with 99 red balls and only one black ball so that there is a 99 percent chance our friend would pick a red ball from the bag. Most people would presumably draw from the bag rather than wait for the exam score. It is almost a sure thing to win the $10 by drawing from the bag, and our friend, being reasonable, probably does not assign a higher than 99 percent chance of receiving a perfect score.

Assuming our friend chooses to draw a ball from the bag with 99 red balls, we can then pose another choice between no-lose gambles: $10 if the test score is perfect versus $10 if a red ball is drawn from a bag, this one filled with 80 red and 20 black balls. If our friend chooses the test score, we know the subjective probability is between 99 percent and 80 percent. We can further refine the bounds by posing the choice between $10 for a perfect test score versus $10 for a red ball from a bag with 90 red and 10 black. Depending on the answer, the probability is between 99 percent and 90 percent or 90 percent and 80 percent.
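The interval-narrowing logic of the de Finetti game is easy to mechanize. Here is a small Python sketch of the same idea (a generic bisection rather than the exact 99/90/80 bag sequence in the text; all names are hypothetical). The comparator function stands in for the person's answer to each choice of gambles.

def definetti_bounds(prefers_event, lo=0.0, hi=1.0, rounds=6):
    """Bound a subjective probability by repeated gamble comparisons.

    prefers_event(p) should return True if the person prefers betting
    on the event over drawing from a bag with fraction p red balls,
    i.e., they judge the event more likely than p."""
    for _ in range(rounds):
        mid = (lo + hi) / 2
        if prefers_event(mid):
            lo = mid    # event judged more likely than mid
        else:
            hi = mid    # bag preferred: event judged less likely than mid
    return lo, hi

# Example: a person whose (unstated) subjective probability is 0.93
print(definetti_bounds(lambda p: 0.93 > p))   # bounds tighten around 0.93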

Such a scheme can be used to uncover our own subjective probabilities. Even using the scheme purely as a thought experiment can be extremely instructive. Aczel (2004, 23) points out that people often restate their probabilities when playing this game; it forces us to think more carefully about our subjective probabilities and to make them consistent with assessments of other events. Aczel also points out that, interestingly, weather forecasters do not tend to change their assessments very much; their profession presumably forces them to think carefully about belief-type or subjective probabilities.

Note that the theory of belief-type probability includes more than just personal degrees of belief. Logical probability (that is, statements about the probability of events conditional on evidence or logical relations) is another form of belief-type probability. An example of a logical probability statement would be the following (taken from Hacking 2001, 142): "Relative to recent evidence about a layer of iridium deposits . . . the probability is 90 percent that the reign of the dinosaurs was brought to an end when a giant asteroid hit the Earth." This is a statement about the probability of some event conditional on evidence. It is intended to express a logical relationship between some hypothesis (here the extinction of dinosaurs) and relevant evidence (here the presence of iridium in asteroids and the distribution of iridium in geological deposits around the globe). In the theory of logical probability, any probability statement is always relative to evidence.

34 This example is modified from the nice explanation in Aczel (2004, 21–24).



The good news in all this is that the laws of probability that we apply to frequency-type (objective) probability carry over to these belief-type (subjective) probability situations. Laws concerning independence of events, unions of events, conditional probability, and so on, all apply equally to frequency-type and belief-type probability. In fact, for most practical purposes, in our daily lives and in risk management applications, we do not need to make any definite distinction between the two; we can think of probability and leave it at that.

THE HISTORY OF THEORIES OF PROBABILITY

The history of the philosophical debate on the foundations of probability is long. The distinction between objective and subjective probability is often ascribed to Knight (1921), but LeRoy and Singell (1987) argue that it more properly belongs to Keynes (1921). (LeRoy and Singell argue that Knight is open to various interpretations but that he drew a distinction between insurable risks and uninsurable uncertainty in which markets collapse because of moral hazard or adverse selection, rather than between objective risks and subjective uncertainties or the applicability or nonapplicability of the probability calculus. They state that "Keynes [1921] explicitly set out exactly the distinction commonly attributed to Knight" [p. 395].)

Frequency-Type Probability. John Venn (1834–1923), the inventor of Venn diagrams, developed one of the first clear statements of limiting frequency theories about probability. Richard von Mises (1883–1953), an Austrian-born applied mathematician, philosopher, and Harvard professor, systematically developed frequency ideas, and A. N. Kolmogorov (1903–1987) published definitive axioms of probability in 1933 and developed fundamental ideas of computational complexity. Karl Popper (1902–1994), an Austrian-born philosopher and professor at the London School of Economics, developed the propensity approach to frequency-type probability.

Belief-Type Probability. John Maynard Keynes (1883–1946), in A Treatise on Probability (1921), provided the first systematic presentation of logical probability. Frank Plumpton Ramsey (1903–1930) and Bruno de Finetti (1906–1985) independently invented the theory of personal probability, but its success is primarily attributed to Leonard J. Savage (1917–1971), who made clear the importance of the concept, as well as the importance of Bayes' rule. De Finetti (and Savage) thought that only personal belief-type probability made sense, whereas Ramsey saw room for a frequency-type concept, especially in quantum mechanics.

There has been, and continues to be, considerable debate over the various theories of probability. To gain an inkling of the potential ferocity of the debate, keep in mind the comment of John Venn, an early developer of the frequency theory, regarding the fact that in the logical theory of probability, a probability is always relative to evidence: "The probability of an event is no more relative to something else than the area of a field is relative to something else" (quoted in Hacking 2001, 143).

A valuable and straightforward exposition of the foundations of modern probability theory is given by the philosopher Ian Hacking (2001). And Hacking (1990, 2006) provides a nice history of probability.


Bayes ’ T heorem and Be l i e f - T ype Probab i l i t y

One important divergence between the frequency-type and belief-type probability approaches is in the central role played by the law of large numbers versus Bayes' rule. The law of large numbers tells us about how relative frequencies and other observed characteristics stabilize with repeated trials. It is central to understanding and using frequency-type probability.

Bayes' rule (or Theorem), in contrast, is central to belief-type probability; so central, in fact, that belief-type probability or statistics is sometimes called Bayesian probability or statistics. Bayes' rule is very simple in concept; it tells us how to update our probabilities, given some new piece of information. Bayes' rule, however, is a rich source of mistaken probabilistic thinking and confusion. The problems that Bayes' rule applies to seem to be some of the most counterintuitive.



A classic example of the application of Bayes' rule is the case of testing for a disease or condition, such as HIV or breast cancer, with a good, but not perfect, test.35 Consider breast cancer, which is relatively rare in the general population (say, 5 in 1,000). Thus, the prior probability that a woman has breast cancer, given no symptoms and no family history, is only about 0.5 percent. Now consider the woman undergoing a mammogram, which is roughly 95 percent accurate (in the sense that the test falsely reports a positive result about 5 percent of the time). What is the chance that if a patient has a positive mammogram result, she actually has breast cancer? The temptation is to say 95 percent because the test is 95 percent accurate, but that answer ignores the fact that the prior probability is so low, only 0.5 percent. Bayes' rule tells us how to appropriately combine the prior 0.5 percent probability with the 95 percent accuracy of the test.

Before turning to the formalism of Bayes' rule, let us reason out the answer, using what Gigerenzer (2002) calls "natural frequencies." Consider that out of a pool of 1,000 test takers, roughly 5 (5 in 1,000) will actually have cancer and roughly 50 will receive false positives (5 percent false-positive rate, 5 in 100, or 50 in 1,000). That is, there will be roughly 55 positive test results, but only 5 will be true positives. This means the probability of truly having cancer, given a positive test result, is roughly 5 in 55, or 9 percent, not 95 in 100, or 95 percent. This result always surprises me, although when explained in this way, it becomes obvious.36

The formalism of Bayes' rule shows how the conditional probability of one event (in this case, the conditional probability of cancer, given a positive test) can be found from its inverse (in this case, the conditional probability of a positive test, given no cancer, or the false-positive rate).

35 Discussed in Aczel (2004, ch. 16), Gigerenzer (2002, ch. 4), and Mlodinow (2008, ch. 10). See also Hacking (2001, ch. 7).

36 Gigerenzer (2002) stresses the usefulness of formulating applications of Bayes' rule and conditional probability problems in such a manner. He argues that just as our color constancy system can be fooled by artificial lighting (so that his yellow-green Renault appears blue under artificial sodium lights), our probabilistic intuition can be fooled when presented with problems in a form that our intuition has not been adapted or trained to handle. Gigerenzer's solution is to reformulate problems in natural frequencies rather than bemoan the inadequacy of human intuition. This is an example of how proper presentation and communication of a risk problem can clarify rather than obfuscate the issues.

Say we have two hypotheses: HY, cancer yes, and HN, cancer no. We have a prior (unconditional) probability of each hypothesis:

$P(HY) = 0.005 \quad \text{and} \quad P(HN) = 0.995$

We also have a new piece of evidence or information: EY, a positive test result, or EN, a negative test result. The test is not perfect, so there is a 95 percent chance the test will be negative when there is no cancer and a 5 percent chance it will be positive with no cancer:

$P(EY \mid HN) = 0.05 \quad \text{and} \quad P(EN \mid HN) = 0.95$

For simplicity, let us assume that the test is perfect if there is cancer (there are no false negatives):

$P(EY \mid HY) = 1.00 \quad \text{and} \quad P(EN \mid HY) = 0.00$

Now, what is the probability that there is actually cancer, given a positive test (hypothesis yes, given evidence yes); that is, what is $P(HY \mid EY)$? Bayes' rule says that

$$P(HY \mid EY) = \frac{P(EY \mid HY) \cdot P(HY)}{P(EY \mid HY) \cdot P(HY) + P(EY \mid HN) \cdot P(HN)} \qquad (2.1)$$

This can be easily derived from the rules of conditional probability (see Hacking 2001, ch. 7), but we will simply take it as a rule for incorporating new evidence (the fact of a positive test result) to update our prior probabilities for the hypothesis of having cancer; that is, a rule on how to use EY to go from $P(HY)$ to $P(HY \mid EY)$. Plugging in the probabilities just given, we get

$$P(HY \mid EY) = \frac{1.00 \times 0.005}{1.00 \times 0.005 + 0.05 \times 0.995} = 0.0913 = 9.13\%$$
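The same computation is a one-liner in code. Below is a minimal Python sketch of Equation 2.1 (the function name bayes_posterior is ours, purely for illustration); we reuse it in later examples.

def bayes_posterior(prior, p_ev_given_h, p_ev_given_not_h):
    """Posterior P(H|E) from Bayes' rule (Equation 2.1)."""
    numerator = p_ev_given_h * prior
    return numerator / (numerator + p_ev_given_not_h * (1 - prior))

# Breast cancer example: 0.5% prior, no false negatives, 5% false positives
print(bayes_posterior(0.005, 1.00, 0.05))   # ~0.0913, i.e., 9.13%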

Bayes' rule has applications throughout our everyday lives as well as in risk management. The breast cancer example shows how important it is to use the updated probability, P(HY|EY) = 9 percent, rather than what our intuition initially gravitates toward, the test accuracy, 1 − P(EY|HN) = 95 percent. Failure to apply Bayes' rule is common and leads to harrowing encounters with doctors and severe miscarriages of justice. Mlodinow (2008) relates his personal experience of being told he was infected with HIV with 999 in 1,000 or 99.9 percent certainty. In reality, an appropriate application of Bayes' Theorem to his positive test results in a probability of about 1 in 11, or 9.1 percent. (He did not have HIV.)37 In legal circles, the mistake of using 1 − P(EY|HN) when P(HY|EY) should be used is called the "prosecutor's fallacy." Mlodinow (2008) discusses the cases of Sally Clark and O. J. Simpson. Gigerenzer has carried out research in this arena, and Gigerenzer (2002) devotes considerable attention to the issue: Chapter 8 to the O. J. Simpson trial and Chapter 9 to a celebrated California case, People v. Collins, among others.



Bayes' rule is central to belief-type probability because it tells us how to consistently use new evidence to update our prior probabilities. Bayesian probability theory is sometimes misunderstood, or caricatured, as a vacuous approach that can be used to arrive at whatever result the speaker desires. If the prior probability is silly (say, a prior probability of 1.0 that the equity risk premium is negative), then the resulting posterior will also be silly. Bayes' rule provides a standard set of procedures and formulas for using new evidence in a logical and consistent manner and, as such, is incredibly useful and powerful. Bayes' rule, however, does not excuse us from the hard task of thinking carefully and deeply about the original (prior) probabilities.

37 To apply Bayes' rule using Gigerenzer's idea of natural frequencies, we need to know that the prior probability of someone like Mlodinow having HIV is about 1 in 10,000 and that the test's false-positive rate is about 1 in 1,000 (or, its accuracy is 99.9 percent). So for a population of 10,000 test takers, there would be 1 true positive and roughly 10 false positives, for a total of 11 positive tests. In other words, the probability of having HIV given a positive test would be about 1 in 11, or 9.1 percent. Using the formalism of Bayes' rule, we have P(HY) = 0.0001, P(EY|HN) = 0.001, and let us assume P(EY|HY) = 1.00. Then, P(HY|EY) = (1.00 × 0.0001)/(1.00 × 0.0001 + 0.001 × 0.9999) = 0.091 = 9.1 percent. For the record, Mlodinow's test was a false positive and he was not infected. Also, note that the application of Bayes' rule is very dependent on the assumption that Mlodinow is at low risk of HIV infection. For an individual at high risk (say, with a prior probability of 1 percent rather than 0.01 percent), we would get: P(HY|EY) = (1.00 × 0.01)/(1.00 × 0.01 + 0.001 × 0.99) = 0.910 = 91 percent. Bayes' rule tells us how to update the prior probabilities in the presence of new evidence; it does not tell us what the prior probabilities are.
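Footnote 37's numbers can be checked with the bayes_posterior sketch from above:

print(bayes_posterior(0.0001, 1.00, 0.001))   # ~0.091: low-risk prior
print(bayes_posterior(0.01, 1.00, 0.001))     # ~0.910: high-risk prior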


Using Frequency-Type and Belief-Type Probabilities

I have spent time explaining the distinction between frequency-type and belief-type probability for one important reason. Financial risk often combines both frequency-type and belief-type probabilities. For one thing, in the real world the future will never be the same as the past; it may be different not just in the particulars but in the distribution of outcomes itself. There will always be totally new and unexpected events; a new product may be introduced, new competitors may enter our business, new regulations may change the landscape.

There is another important reason why we need to consider both frequency-type and belief-type probabilities: Single events always involve belief-type probability. What is the chance that losses tomorrow will be less than $50,000? That is a question about a single event and as such is a question about belief-type and not frequency-type probability. Probability statements about single events are, inherently, belief type. We may, however, base the belief-type probability on frequency-type probability.

Hacking (2001, 137) discusses the frequency principle, a rule of thumb that governs when and how we switch between frequency-type and belief-type probability. He discusses the following example: A fair coin is tossed, but before we can see the result, the coin is covered. What is the probability that this particular coin toss is heads? This is a single event. We cannot repeat this particular experiment. And yet, it is clear that we should, rationally and objectively, say that the probability is one-half. We know the frequency-type probability for a fair coin turning up heads is one-half, and because we know nothing else about this single trial, we should use this frequency-type probability. The frequency principle is just this: When we know the frequency-type probability and nothing else about the outcome of a single trial, we should use the frequency-type probability.

THOMAS BAYES (1702–1761)

Thomas Bayes was a Presbyterian minister at Mount Sion, Tunbridge Wells, England. Bayes' considerable contribution to the theory of probability rests entirely on a single paper, which he never published. Bayes left the paper to fellow minister Richard Price (a mathematician in his own right and credited with founding the field of actuarial science), who presented it to the Royal Society on December 23, 1763. The paper apparently aroused little interest at the time, and full appreciation was left to Pierre-Simon Laplace (1749–1827). Yet, it has had a fundamental, lasting, and continuing influence on the development of probability and statistics, although it has often been considered controversial. "It is hard to think of a single paper that contains such important, original ideas as does Bayes'. His theorem must stand with Einstein's $E = mc^2$ as one of the great, simple truths" (D. V. Lindley 1987, in Eatwell, Milgate, and Newman 1987, The New Palgrave, vol. 1, 208).



Something like the frequency principle generally holds. The world is not a repeated game of chance to which fixed rules apply, and so we must always apply some component of subjective or belief-type probability to our management of risk. Aczel (2004) summarizes the situation nicely (emphasis in the original):

When an objective [frequency-type] probability can be determined, it should be used. (No one would want to use a subjective probability to guess what side a die will land on, for example.) In other situations, we do our best to assess our subjective [belief-type] probability of the outcome of an event. (p. 24)

BAYES' THEOREM, STREAKS, AND FUND PERFORMANCE

We can use Bayes' Theorem to help improve our understanding of fund performance and streaks, such as the streak experienced by the Legg Mason Value Trust Fund discussed earlier. Remember that through 2005, the Value Trust Fund had outperformed the S&P 500 for 15 years straight. And remember that for a single fund having no exceptional skill (that is, with a 50/50 chance of beating the index in any year), the probability of such a streak is very small: (1/2)^15, or 0.000031, or 0.0031 percent. For a collection of 1,000 funds, however, the probability that one or more funds would have such a streak is 3 percent. The probability of having one or more such funds during a 40-year period out of a pool of 1,000 is about 32.8 percent.

Now let us turn the question around and consider what such a streak, when it occurs, tells us about funds in general and the Value Trust Fund in particular. Roughly speaking, our earlier application was probabilistic, using probability theory to say something about what we should observe. Our current application is more statistical, using data to make inferences about our underlying model.




Let us start with a simplistic hypothesis or model of the world, a model in which some managers have exceptional skill. Specifically, let us take the hypothesis HY to be that out of every 20 funds, one fund beats the index 60 percent of the time. In other words, there is a small proportion (5 percent) of "60 percent skilled" funds, with the other 19 out of 20 (95 percent of funds) being "49.47 percent skilled." On average, funds have a 50 percent chance of beating the index. Of course, there is no certainty in the world, and it would be foolish to assume that exceptional skill exists with probability 1.00; that is, to assume P(HY) = 1.00. We must consider the alternative hypothesis, HN, that there is no special skill, and each and every fund has a 50/50 chance of beating the market in any one year.

In this case, the evidence is observing a streak for some fund among all funds (say, for argument's sake, the pool is 1,000 funds), with EY the evidence of yes observing a 15-year streak in 40 years and EN the evidence of not observing a 15-year streak. Now we can ask, what does this evidence, observing a streak, tell us about the probability of HY (the world has exceptional managers) versus HN (no managers have exceptional skill)?

We start by calculating the probability of observing a streak in a world with exceptional skill versus no exceptional skill:38

$P(EY \mid HY) = P(\text{streak for some fund} \mid \text{5\% of funds 60\% skilled, 95\% 49.47\% skilled}) = 1 - (1 - 0.000588)^{1000} = 0.4447$
$\Rightarrow P(EN \mid HY) = 1 - 0.4447 = 0.5553$

38 By simulation, the probability that a single 60 percent skilled fund has a 15-year streak in 40 years is 0.005143, versus 0.000348 for a 49.47 percent skilled fund. Thus, P(15-yr run in 40 yrs | HY) = 0.05 × P(15-yr run | 0.6 manager) + 0.95 × P(15-yr run | 0.4947 manager) = 0.05 × 0.005143 + 0.95 × 0.000348 = 0.000588.


$P(EY \mid HN) = P(\text{streak for some fund} \mid \text{all funds 50\% skilled}) = 1 - (1 - 0.000397)^{1000} = 0.3277$
$\Rightarrow P(EN \mid HN) = 1 - 0.3277 = 0.6723$
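The simulation in footnote 38 is straightforward to reproduce. Here is a rough Monte Carlo sketch in Python (function and variable names are ours); with enough trials it gives per-fund streak probabilities near 0.005143, 0.000348, and 0.000397, and the pool-level figures 0.4447 and 0.3277 then follow from the complement rule.

import random

def streak_prob(p, n_years=40, streak_len=15, trials=500_000):
    """Monte Carlo estimate of the chance that a fund winning each
    year with probability p has a run of at least streak_len
    consecutive winning years somewhere in n_years years."""
    hits = 0
    for _ in range(trials):
        run = 0
        for _ in range(n_years):
            run = run + 1 if random.random() < p else 0
            if run >= streak_len:
                hits += 1
                break
    return hits / trials

# Mix the two fund types, then scale up to a pool of 1,000 funds
p_fund = 0.05 * streak_prob(0.6) + 0.95 * streak_prob(0.4947)  # ~0.000588
print(1 - (1 - p_fund) ** 1000)                   # ~0.4447 = P(EY|HY)
print(1 - (1 - streak_prob(0.5)) ** 1000)         # ~0.3277 = P(EY|HN)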

Now we can ask, what is P(HY|EY)? That is, what is the probability of the world having skilled managers, given that we observe at least one fund with a streak of 15 years? Bayes' rule (Equation 2.1) says that

$$P(HY \mid EY) = \frac{P(EY \mid HY) \cdot P(HY)}{P(EY \mid HY) \cdot P(HY) + P(EY \mid HN) \cdot P(HN)} = \frac{0.4447 \cdot P(HY)}{0.4447 \cdot P(HY) + 0.3277 \cdot P(HN)}$$

There are two important lessons to take from this equation. First, Bayes' rule itself tells us nothing about what the prior probabilities should be (although Bayes' original paper tried to address this issue). We may start being highly confident that exceptional skill exists [say, P(HY) = 0.90] or very skeptical [P(HY) = 0.10]. We are taking the probability P(HY) as pure belief-type probability: we must use experience or judgment to arrive at it, but it is not based on hard, frequency-type evidence. The second lesson is that Bayes' rule tells us how to apply evidence to our belief-type probabilities to consistently update those probabilities in concert with evidence. In fact, when we apply enough and strong-enough evidence, we will find that divergent prior belief-type probabilities [P(HY) and P(HN)] will converge to the same posterior probabilities [P(HY|EY) and P(HN|EY)].

We can examine exactly how much the probabilities will change with the evidence of a streak. Let us say that I am skeptical that the world has managers with superior skill; my prior belief-type probability for HY, the hypothesis that there are funds with superior skill (60 percent skilled funds), is

$P(HY = \text{5\% of managers have superior skill and can beat the index better than 50/50}) = 0.10$

Then, applying Bayes’ rule (Equation 2.1) gives

$P(HY \mid EY) = P(\text{5\% of managers have skill} \mid \text{at least one 15-year streak}) = 0.13$



In other words, the evidence of a streak alters my initial (low) probability but not by very much.

Now consider the other extreme, where I strongly believe there are managers with superior skill, so that my prior is P(HY) = 0.90. Then applying Bayes' rule gives P(HY|EY) = 0.92, and again my initial assessment is not altered very much. In sum, the evidence of a 15-year streak is not strong evidence in favor of superior manager skill. The streak does not prove (but neither does it disprove) the hypothesis that superior skill exists.
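Both updates are one-liners with the bayes_posterior sketch from earlier:

print(bayes_posterior(0.10, 0.4447, 0.3277))   # ~0.13: skeptical prior
print(bayes_posterior(0.90, 0.4447, 0.3277))   # ~0.92: confident prior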

Let us now ask a subtly different question: Say we knew or were convinced for some reason that the world contained some managers with superior skill (we take as a given the hypothesis that 5 percent of the managers are 60 percent skilled funds). Now, what does a 15-year streak for a particular fund tell us about that fund? How does that change our assessment of whether that fund is a 60 percent skilled fund versus a 49.47 percent skilled fund?

In this case, the hypothesis HY is that a particular fund is 60 percent skilled and the evidence is a 15-year streak out of 40 years:

$P(EY \mid HY) = P(\text{streak for one fund} \mid \text{this fund is 60\% skilled}) = 0.005143$
$\Rightarrow P(EN \mid HY) = 1 - 0.005143 = 0.99486$

$P(EY \mid HN) = P(\text{streak for one fund} \mid \text{this fund is 49.47\% skilled}) = 0.00035$
$\Rightarrow P(EN \mid HN) = 1 - 0.00035 = 0.99965$

Now we can ask, what is P(HY|EY)? That is, what is the probability that this manager is 60 percent skilled, given that this fund has a streak of at least 15 years? Bayes' rule says that

$$P(HY \mid EY) = \frac{P(EY \mid HY) \cdot P(HY)}{P(EY \mid HY) \cdot P(HY) + P(EY \mid HN) \cdot P(HN)} = \frac{0.005143 \times 0.05}{0.005143 \times 0.05 + 0.00035 \times 0.95} = 0.436$$

In other words, the evidence that this fund has a 15-year streak changes our probability that this particular fund is a skilled fund from P(HY) = 0.05 to P(HY|EY) = 0.436. (This result is conditional on the world containing a 5 percent smattering of skilled funds among the large pool of all funds.) We could view this either as a big change (from 5 percent probability to 43.6 percent probability) or as further indication that a 15-year streak is weak evidence of skill because we still have less than a 50/50 chance that this particular manager is skilled.



The Legg Mason Value Trust Fund outperformed for the 15 years up to 2005, but performance during the following years definitively broke the streak; the fund underperformed the S&P 500 for 3 out of the 4 years subsequent to 2005.39 We can use Bayes' Theorem to examine how much this evidence would change our probability that the fund is 60 percent skilled. The hypothesis HY is still that the fund is 60 percent skilled, but now P(HY) = 0.436 and

$P(EY \mid HY) = P(\text{fund underperforms 3 of 4 years} \mid \text{this fund is 60\% skilled}) = P(\text{binomial variable fails 3 of 4 trials} \mid p_{\text{success}} = 0.6) = 0.1536$

$P(EY \mid HN) = P(\text{fund underperforms 3 of 4 years} \mid \text{this fund is 49.47\% skilled}) = P(\text{binomial variable fails 3 of 4 trials} \mid p_{\text{success}} = 0.4947) = 0.2553$

Bayes’ Theorem gives

$$P(HY \mid EY) = \frac{P(EY \mid HY) \cdot P(HY)}{P(EY \mid HY) \cdot P(HY) + P(EY \mid HN) \cdot P(HN)} = \frac{0.1536 \times 0.436}{0.1536 \times 0.436 + 0.2553 \times 0.564} = 0.317$$

This evidence drops the probability that the Value Trust Fund is skilled, but not as much as I would have thought.
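The whole updating chain (a 5 percent prior, then the streak, then the broken streak) can be verified with a few lines of Python, again reusing the bayes_posterior sketch and math.comb for the binomial terms:

from math import comb

def fails_3_of_4(p):
    """Binomial probability of exactly 3 failures in 4 trials
    when each trial succeeds with probability p."""
    return comb(4, 3) * (1 - p)**3 * p

# Stage 1: the 15-year streak moves the prior from 0.05 to ~0.436
post_streak = bayes_posterior(0.05, 0.005143, 0.00035)

# Stage 2: underperforming 3 of 4 years moves ~0.436 down to ~0.317
post_broken = bayes_posterior(post_streak, fails_3_of_4(0.6), fails_3_of_4(0.4947))

print(post_streak, post_broken)   # ~0.436, ~0.317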

In conclusion, this example shows how we can use probability theory and Bayes' Theorem to organize our belief-type probabilities and combine them with evidence and experience. It also shows how important it is to systematize and organize our probabilistic thinking.

A 15-year streak sounds quite impressive, but upon closer examination, we see that it is not as unusual as we might have thought.40

39 As noted in an earlier footnote, for the four years from 2006 to 2009, the Value Trust underperformed the S&P 500 for 2006, 2007, and 2008.

40 I am not arguing here against the existence of special skill as much as I am arguing in favor of a critical approach to the data. Focusing only on Legg Mason Value Trust ignores the fact that there were many other winning funds with track records that were not quite as good. Their existence would (I think, greatly) raise the likelihood that funds with superior skill, not pure luck, exist. This assertion does not change the general observation, however, that "beating the market" is hard.


Risk versus Uncertainty or Ambiguity

The good news is that the rules of probability that apply to frequency-type probability apply equally to belief-type probability. We can use the two interchangeably in calculations and for many purposes can ignore any distinction between them.

Although I argue that we can often ignore any distinction between frequency-type (objective) and belief-type (subjective) probability, many writers argue otherwise. This distinction is usually phrased by contrasting risk (roughly corresponding to frequency-type probability) to uncertainty or ambiguity (where numerical probabilities cannot be assigned, usually corresponding to some form of belief-type or subjective probability). One expression of this view is Lowenstein (2000):

Unlike dice, markets are subject not merely to risk, an arithmetic concept, but also to the broader uncertainty that shadows the future generally. Unfortunately, uncertainty, as opposed to risk, is an indefinite condition, one that does not conform to numerical straitjackets. (p. 235)

Lowenstein is a popular author and not a probabilist or statistician, but the same view is held by many who think carefully and deeply about such issues. For example, Gigerenzer (2002) states it as follows:

In this book, I call an uncertainty a risk when it can be expressed as a number such as a probability or frequency on the basis of empirical data. . . . In situations in which a lack of empirical evidence makes it impossible or undesirable to assign numbers to the possible alternative outcomes, I use the term "uncertainty" instead of "risk." (p. 26)



The distinction between risk and uncertainty is usually attributed to Knight (1921) and often called Knightian uncertainty. It is often argued that uncertainty or ambiguity is inherently distinct from risk in the sense that people behave differently in the face of ambiguity than they do when confronted with computable or known probabilities (risk). It is argued that there is ambiguity aversion separate from risk aversion.

Various paradoxes are said to provide evidence in favor of ambiguity and ambiguity aversion, with probably the best known being the Ellsberg paradox (Ellsberg 1961). I am not convinced by these paradoxes, and I maintain that frequency-type (objective) and belief-type (subjective) probabilities can and should be used interchangeably.

My conclusion that frequency-type and belief-type probabilities can, and indeed should, be used interchangeably is not taken lightly, but on balance, I think we have no other choice in risk management and in our daily lives. The future is uncertain, subject to randomness that is not simply replication of a repeated game. But we have to make decisions, and probability theory is such a useful set of tools that we have to use it. The utility of treating frequency-type and belief-type probabilities as often interchangeable outweighs any problems involved in doing so.

When using belief-type probabilities, however, we must be especially careful. We cannot rely on them in the same way as we can rely on frequency-type probabilities in a game of chance. We must be honest with ourselves that we do not, indeed cannot, always know the probabilities. The de Finetti game and Bayes' rule help keep us honest, in the sense of being both realistic in uncovering our prior (belief-type) probabilities and consistent in updating probabilities in the face of new evidence. The formalism imposed by careful thinking about belief-type probability may appear awkward to begin with, but careful thinking about probability pays immense rewards.

ELLSBERG PARADOX

Daniel Ellsberg (b. 1931) has the distinction of being far better known for political activities than for his contribution to probability and decision theory. Ellsberg obtained his PhD in economics from Harvard in 1962. In 1961, he published a discussion of a paradox that challenges the foundations of belief-type probability and expected utility theory. In the late 1960s, Ellsberg worked at the RAND Corporation, contributing to a top-secret study of documents regarding affairs associated with the Vietnam War. These documents later came to be known as the Pentagon Papers. Ellsberg photocopied them, and in 1971, they were leaked and first published by the New York Times. At least partially in response to the leaked papers, the Nixon administration created the "White House Plumbers," whose apparent first project was breaking into Ellsberg's psychiatrist's office to try to obtain incriminating information on Ellsberg. The Plumbers' best-known project, however, was the Watergate burglaries.

Ellsberg's 1961 paper discusses a series of thought experiments in which you are asked to bet on draws from various urns. (Although popularized by Ellsberg and commonly known by his name, a version of this paradox was apparently noted by Keynes 1921, par. 315, fn 2.)

The experiment I discuss here concerns two urns, each having 100 balls. For Urn 1, you are told (and allowed to verify if you wish) that there are 100 balls, 50 of which are red and 50 black. For Urn 2, in contrast, you are told only that there are 100 balls, with some mix of red and black (and only red or black); you are not told the exact proportions. For the first part of the experiment, you will draw a single ball from Urn 1 and a single ball from Urn 2 and be paid $10 depending on the selection of red versus black. Before you draw, you must decide which payoff you prefer:

RED = $10 if Red, $0 if Black

BLACK = $0 if Red, $10 if Black

When asked to choose between the two payoffs, most people will be indifferent between red versus black for both the first and the second urn. For Urn 1, we have evidence on the 50/50 split, so we can assign a frequency-type probability of 50 percent to both red and black. For Urn 2, we do not have any frequency-type information, but we also do not have any information that red or black is more likely, and most people seem to set their subjective or belief-type probability at 50/50 (red and black equally likely).

In the second part of the experiment, you will draw a single ball and get paid $10 if red, but you get to choose whether the draw is from Urn 1 or Urn 2. It seems that most people have a preference for Urn 1, the urn with the known 50/50 split. (Remember that this is a thought experiment, so when I say "most people" I mean Ellsberg and colleagues he spoke with, and also myself and colleagues I have spoken with. Nonetheless, the conclusion seems pretty firm. And because this is a thought experiment, you can try this on yourself and friends and colleagues.) The preference for red from Urn 1 seems to establish that people assess red from Urn 1 as more likely than red from Urn 2.

Now we get to the crux of the paradox: The preference for Urn 1 is the same if the payoff is $10 on black, which seems to establish black from Urn 1 as more likely than black from Urn 2. In other words, we seem to have the following:

Red 1 preferred to Red 2 ⇒ Red 1 more likely than Red 2.

Black 1 preferred to Black 2 ⇒ Black 1 more likely than Black 2.

But this is an inconsistency. Red 2 and Black 2 cannot both be less likely because that would imply that the total probability for Urn 2 is less than 1.00. (Try it. For any probabilities for Red 1 and Black 1, the relations just given imply that the total probability for Urn 2 is less than 1.00.)
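The arithmetic behind the parenthetical "try it" takes one line. Writing P(R1) for the probability of red from Urn 1, and so on (my notation), the two inferences say P(R2) < P(R1) and P(B2) < P(B1); adding them gives

$$P(R_2) + P(B_2) < P(R_1) + P(B_1) = 1,$$

so the probabilities for Urn 2 cannot sum to 1.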

Ellsberg claimed that this inconsistency argues for "uncertainties that are not risk" and "ambiguity" and that belief-type or subjective probabilities (as for Urn 2) are different in a fundamental way from frequency-type probabilities. Subsequent authors have worked to develop theories of probability and expected utility to explain this paradox (see Epstein 1999; Schmeidler 1989).

There are a few obvious critiques of the paradox. Maybe we simply prefer the easier-to-understand Urn 1, not wanting to waste brain cells on thinking through all implications of the problem. Maybe we are deceit averse, wanting to shy away from Urn 2 in case the experimenter somehow manipulates the red and black balls to our disadvantage. But I think the paradox goes deeper. When I think long and hard about the problem (I make sure I fully explain the problem to myself and reliably assure myself that I, as the experimenter, will not cheat), I still prefer the 50/50 Urn 1.

The resolution of the paradox lies in viewing the Ellsberg experiment in the context of a larger meta-experiment:

• X percent probability of a single draw (the original Ellsberg experiment)

• 1 – X percent probability of repeated draws

Real differences exist between Urn 1 and Urn 2, and Urn 1 is less risky (and thus, preferable) in all cases except the Ellsberg single-draw experiment. It does not take much thinking to realize that repeated draws from Urn 2, where we do not know how many are red or black, are more risky than repeated draws from Urn 1, where we know there are precisely 50 red and 50 black. With Urn 2, I might choose the red payoff but have the bad luck that there are no red and all black. For repeated draws, I am stuck with my initial choice. For a single draw, it does not really matter: because I do not have any prior knowledge, and because I get to choose red or black up front, the urn really does behave like a 50/50 split. (Coleman 2011b discusses the problem in more detail and shows how a mixed distribution for Urn 2 will be more risky for repeated draws than the simple 50/50 distribution of Urn 1.)

So, we have a situation in which for a single draw, Urn 1 and Urn 2 are probabilistically equivalent but for repeated or multiple draws, Urn 1 is preferable. For the meta-experiment, it is only in the special case where X = 100 percent that the two urns are equivalent; whenever X < 100 percent, Urn 1 is preferable. Even a small probability that there will be repeated draws leads to Urn 1 being preferred. So, what would be the rational response: choose Urn 2, which is equivalent to Urn 1 in the single-draw case but worse in any repeated-draw experiment, or, for no extra cost, choose Urn 1? The choice is obvious: as long as there is some nonzero chance that the experiment could involve repeated draws (and psychologically it is hard to ignore such a possibility), we should choose Urn 1.

Stated this way, there is no paradox. From this perspective, preference for Urn 1 is rational and fully consistent with expected utility theory. In summary, I do not find the Ellsberg paradox to be evidence in favor of ambiguity or uncertainty. I do not see the need for ambiguity aversion as a supplement to the standard risk aversion of expected utility theory. Similarly, I do not believe that we need to amend the concept of subjective or belief-type probability.
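The single-draw versus repeated-draw claim is easy to check numerically. This simulation is my sketch, not the book's; it assumes a uniform prior over Urn 2's composition (0 to 100 red balls, fixed once per experiment) and a $10 bet on red each draw:

```python
import random

def payoff(n_red: int, n_draws: int, bet: int = 10) -> int:
    """Total payoff from n_draws draws (with replacement) from an urn
    with n_red red balls out of 100, betting `bet` on red each draw."""
    return sum(bet for _ in range(n_draws) if random.random() < n_red / 100)

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def simulate(n_draws: int, trials: int = 50_000):
    urn1 = [payoff(50, n_draws) for _ in range(trials)]
    # Urn 2: unknown composition, drawn once per experiment (uniform prior).
    urn2 = [payoff(random.randint(0, 100), n_draws) for _ in range(trials)]
    return mean_var(urn1), mean_var(urn2)

for n in (1, 10):
    (m1, v1), (m2, v2) = simulate(n)
    print(f"{n:>2} draw(s): Urn 1 mean {m1:.1f}, var {v1:.0f} | "
          f"Urn 2 mean {m2:.1f}, var {v2:.0f}")
```

For a single draw the two urns come out identical (same mean, same variance); for ten draws the means still match, but Urn 2's payoff variance is several times Urn 1's, which is precisely the sense in which Urn 1 is less risky whenever repeated draws are possible.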

2.5 THE CURSE OF OVERCONFIDENCE

Much of this chapter has been concerned with how our human intuition can be fooled by randomness and uncertainty. We have seen that it is easy to generate (random) runs and streaks that seem, intuitively, very nonrandom. Humans, however, crave control over their environment, and we will often impose an illusion of certainty and control over purely random events. It is all too easy, all too tempting, to mistake luck for skill, and the result can be overconfidence in our own abilities. There is a fundamental tension here because confidence in one's abilities is as necessary for successful performance in the financial arena as it is in any area of life, but overconfidence can also breed hubris, complacency, and an inability to recognize and adapt to new circumstances.

Gladwell (2009) wrote an interesting essay discussing the importance of psychology, in particular confidence and overconfidence, in the finance industry and in running an investment bank. He focuses specifically on Jimmy Cayne and the fall of Bear Stearns in 2008 (with interesting digressions to the debacle of Gallipoli). With hindsight, Cayne's words and actions can seem to be the purest hubris. But Gladwell argues, convincingly, that such confidence is a necessary component of running an investment bank. If those running the bank did not have such optimism and confidence, why would any customers or competitors have confidence in the bank? And yet such confidence can be maladaptive.

Both Gladwell and Mlodinow (2008) discuss the work of the psychologist Ellen Langer and our desire to control events. Langer showed that our need to feel in control clouds our perception of random events. In one experiment (Langer 1975), subjects bet against a rival. The rival was arranged to be either dapper or a schnook. Against the schnook, subjects bet more aggressively, even though the game was pure chance and no other conditions were altered. Subjects presumably felt more in control and more confident betting against a nervous, awkward rival than against a confident one, although the probabilities were the same in both cases.

In another experiment (Langer and Roth 1975), Yale undergraduates were asked to predict the results of 30 random coin tosses. When queried afterward, the students behaved as if predicting a random coin toss was a skill that could be improved with practice. Subjects for whom tosses were manipulated to exhibit early streaks (but also so that overall they guessed correctly half the time) rated themselves better at the guessing than other subjects, even though all subjects were correct half the time.

The problem of overconfidence may be the most fundamental and difficult in all of risk management because confidence is necessary for success but overconfidence can lead to disaster. This situation is made even worse by the natural human tendency to forget past bad events. Maybe that is just part of the human psyche; it would be hard to survive if past losses remained forever painful.

I know of no foolproof way to avoid overconfidence. Possibly the most insightful part of Gladwell (2009) is in the closing paragraphs, where he contrasts the bridge-playing expertise of Cayne and others at Bear Stearns with the "open world where one day a calamity can happen that no one had dreamed could happen" (p. 7). This discussion harks back to the distinction between frequency-type versus belief-type probability. Bridge is a game of chance, a repeated game with fixed and unchanging rules to which we can apply the law of large numbers. We may momentarily become overconfident as bridge players, but the repeated game will come back to remind us of the underlying probabilities. The real world, in contrast, is not a repeated game, and the truly unexpected sometimes happens. And most importantly, because the unexpected does not happen frequently, we may become overconfident for long periods before nature comes back to remind us that the unexpected does occur.

2.6 LUCK

Luck is the irreducible chanciness of life. Luck cannot be controlled, but it can be managed.

What do I mean by luck versus risk? Risk is the interaction of the uncertainty of future outcomes with the benefits and costs of those outcomes. Risk can be studied and modified. Luck is the irreducible chanciness of life—chanciness that remains even after learning all one can about possible future outcomes, understanding how current conditions and exposures are likely to alter future outcomes, and adjusting current conditions and behavior to optimally control costs and benefits. Some things are determined by luck, and it is a fool's errand to try to totally control luck.

The philosopher Rescher (2001) states it well:

The rational domestication of luck is a desideratum that we can achieve to only a very limited extent. In this respect, the seventeenth-century philosophers of chance were distinctly overoptimistic. For while probability theory is a good guide in matters of gambling, with its predesignated formal structures, it is of limited usefulness as a guide among the greater fluidities of life. The analogy of life with games of chance has its limits, since we do not and cannot effectively play life by fixed rules, a fact that sharply restricts the extent to which we can render luck amenable to rational principles of measurement and calculation. (pp. 138–139)

Rescher's point is that luck is to be managed, not controlled. The question is not whether to take risks—that is inevitable and part of the human condition—but rather how to appropriately manage luck and keep the odds on one's side.

The thrust of this chapter has been twofold: Randomness and luck are part of the world, and randomness is often hard to recognize and understand. The success or failure of portfolio managers, trading strategies, and firms depends on randomness and luck, and we need to recognize, live with, and manage that randomness and luck.

In the next chapter, I change gears, moving away from the theory of probability and focusing on the business side of managing risk. The insights and approach to uncertainty discussed in this chapter must be internalized to appropriately manage risk on a day-to-day basis.


CHAPTER 3

Managing Risk

In the previous chapter, I discussed uncertainty, risk, and the theory of probability. Now, I change gears and move from hard science to soft business management because when all is said and done, risk management is about managing risk—about managing people, processes, data, and projects. It is not just elegant quantitative techniques; it is the everyday work of actually managing an organization and the risks it faces. Managing risk requires making the tactical and strategic decisions to control those risks that should be controlled and to exploit those opportunities that should be exploited. Managing profits cannot be separated from managing losses or the prospect of losses. Modern portfolio theory tells us that investment decisions are the result of trading off return versus risk; managing risk is just part of managing returns and profits.

Managing risk must be a core competence for any financial firm. The ability to effectively manage risk is the single most important characteristic separating financial firms that are successful and survive over the long run from firms that are not successful. At successful firms, managing risk always has been and continues to be the responsibility of line managers from the board through the CEO and down to individual trading units or portfolio managers. Managers have always known that this is their role, and good managers take their responsibilities seriously. The only thing that has changed in the past 10 or 20 years is the development of more sophisticated analytical tools to measure and quantify risk. One result has been that the technical skills and knowledge required of line managers have gone up. Good managers have embraced these techniques and exploited them to both manage risk more effectively and make the most of new opportunities. Not all firms and managers, however, have undertaken the human capital and institutional investments necessary to translate the new quantitative tools into effective management.

The value of quantitative tools, however, should not be overemphasized. If there is one paramount criticism of the new risk management paradigm, it is that the industry has focused too much on measurement, neglecting the old-fashioned business of managing the risk. Managing risk requires experience and intuition in addition to quantitative measures. The quantitative tools are invaluable aids that help to formalize and standardize a process that otherwise would be driven by hunches and rules of thumb, but they are no substitute for informed judgment. Risk management is as much about apprenticeship and learning by doing as it is about book learning. Risk management is as much about managing people, processes, and projects as it is about quantitative techniques.

3.1 MANAGE PEOPLE

Managing people means thinking carefully about incentives and compensation. Although I do not pretend to have the answers for personnel or incentive structures, I do want to emphasize the importance of compensation and incentive schemes for managing risk and building a robust organization that can withstand the inevitable buffeting by the winds of fortune. Managing risk is always difficult for financial products and financial firms, but the principal-agent issues introduced by the separation of ownership and management substantially complicate the problems for most organizations.

As discussed in Chapter 2, risk involves both the uncertainty of outcomes and the utility of outcomes. The distribution of outcomes is objective in the sense that it can, conceptually at least, be observed and agreed upon by everyone. The utility of outcomes, in contrast, depends on individual preferences and is in essence subjective. The preferences that matter are the preferences of the ultimate owner or beneficiary. Consider an individual investor making his own risk decisions. The problem, although difficult, is conceptually straightforward because the individual is making his own decisions about his own preferences. Although preferences might be difficult to uncover, in this case at least it is only the preferences of the owner (who is also the manager of the risk) that matter.

Now consider instead a publicly traded firm—say, a bank or investment firm. The ultimate beneficiaries are now the shareholders. As a rule, the shareholders do not manage the firm; they instead hire professional managers and delegate the authority and responsibility for managing the risks. The preferences of the shareholders are still the relevant preferences for making decisions about risk, but now it is the managers who make most decisions. The shareholders must ensure that the decisions reflect their preferences, but two difficulties arise here. The first is that the managers may not know the owners' preferences, which is a real and potentially challenging problem, but it is not the crux of the problem. Even if the owners' preferences are known, the second difficulty will intrude: The preferences of the managers will not be the same as those of the shareholders, and the interests of the managers and owners will not be aligned. The owners must design a contract or compensation scheme that rewards managers for acting in accordance with owners' preferences and punishes them for acting contrary to those preferences.

This issue goes by the name of the principal-agent problem in the economics literature.1 The essence of the problem is in addressing the difficulties that arise when a principal hires an agent to perform some actions, the interests (preferences) of the two are not the same, and there is incomplete and asymmetric information so that the principal cannot perfectly monitor the agent's behavior. Employer-employee relations are a prime arena for principal-agent issues, and employment contracts are prime examples of contracts that must address principal-agent problems.

In virtually any employer-employee relationship, there will be some divergence of interests. The principal's interest will be to have some tasks or actions performed so as to maximize the principal's profit or some other objective relevant to the principal. Generally, the agent will have other interests. The agent will have to expend effort and act diligently to perform the actions, which is costly to the agent. In a world of perfect information, no uncertainty, and costless monitoring, the principal-agent problem can be remedied. A contract can be written, for example, that specifies the required level of effort or diligence—rewarding the agent depending on the effort expended or on the observed outcome of the action. In such a world, the interests of the principal and agent can be perfectly aligned.

When there is uncertainty, asymmetric information, and costly monitoring, however, the principal-agent problem comes to the fore and designing a contract to align the interests of principal and agent can be very difficult. A compensation scheme cannot generally be based on the agent's effort because this effort can be observed only by the agent (asymmetric information) or is costly to monitor (costly monitoring). There will be difficulties in basing the compensation scheme on observed outcomes. First, it might be difficult or impossible to effectively measure the outcomes (costly monitoring and asymmetric information). Second, because of uncertainty, the outcome might not reflect the agent's effort; rewarding output may reward lazy but lucky agents while punishing diligent but unlucky agents to such a degree that it provides no incentive for agents to work hard.

1 See Stiglitz in Eatwell, Milgate, and Newman (1987, The New Palgrave, vol. 3, 966–971) and references therein, including contributions by Ross 1973; Mirrlees 1974, 1976; and Stiglitz 1974, 1975. The problem is, of course, much older, with an entry in the original Palgrave's Dictionary of Economics (1894–1899) by J. E. C. Munro.


Furthermore, rewarding individuals based on individual measures of output may destroy incentives for joint effort and lead to free-riding problems.

Risk management usually focuses on the problem of measuring risk and the decisions that flow from that problem—combining the uncertainty of outcomes and the utility of outcomes to arrive at the decisions on how to manage risk. In the real world, an additional layer of complexity exists—making sure that managers (agents) actually implement the appropriate measures, either by ensuring that they have the correct incentives or through constant monitoring and control.

Many types of compensation schemes are used in practice, including fixed versus variable compensation (salaries and bonuses or base and commission), deferred compensation, and granting of share ownership with various types and degrees of vesting. Designing compensation and incentive schemes has to be one of the most difficult and underappreciated, but also one of the most important, aspects of risk management. Substantial effort is devoted to measuring and monitoring risk, but unless those managers who have the information also have the incentives to act in concert with the owners' preferences, such risk measurement is useless.

Incentive and compensation schemes are difficult to design—for good times as well as bad times. During good times, it is easier to keep people happy—there is money and status to distribute—but difficult to design incentives that align the principal's and agent's interests. During bad times, it is harder to make people happy—money and status are often in short supply—and it is consequently difficult to retain good people. It is important to design compensation schemes for both good and bad times and to plan for times when the organization is under stress from both high profits (which breeds hubris and a tendency to ignore risk) and low profits (when everybody leaves).

As mentioned at the beginning of this section, I do not have answers for the puzzles of compensation and incentives. The topic is one, however, that rewards careful thinking. There is clearly no substitute for monitoring and measuring risk, but properly designed incentive schemes can go far toward managing and controlling risks. If the interests of managers throughout the organization can be properly aligned, these managers can move part of the way from being disasters in the waiting that require unrelenting monitoring and control to being allies of the principals in controlling and managing risk.

One final issue that I want to mention is the importance of embedded options and payout asymmetry in both compensation and capital structure. In compensation of traders and portfolio managers there is the well-known "trader's put," in which a trader wins if things go well but loses little if things go badly. The trader receives a large bonus in a good year and is let go, with no claw-back of the bonus, in a bad year. Furthermore, traders can often find another trading position with large upside potential.

For hedge funds, the performance fee is often structured as a percentage of returns above a high-water mark (the high-water mark representing the highest net asset value previously achieved by the fund). A straight fee based on percentage of returns may encourage leverage and risk taking—behavior that can be discouraged by adjusting the fee for the risk taken, as discussed in Coleman and Siegel (1999). The high-water mark is designed (and probably originally intended) to make terms more favorable to the investor but, in fact, acts as a put option on returns. The manager receives fees in good times but after a period of losses will not earn performance fees. The payout becomes asymmetric, with performance fees if things go well but no fee penalty if they go badly (and if things go really badly, the manager may be able to close the fund and start again with a new and lower high-water mark). Thus, a high-water mark may hurt rather than help the investor.
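A toy calculation makes the put-like asymmetry concrete. The sketch and its numbers (a 20 percent performance fee and a made-up NAV path) are mine, purely for illustration:

```python
def hwm_fees(navs, fee_rate=0.20):
    """Performance fees under a high-water mark: the manager earns
    fee_rate only on gains above the highest NAV previously achieved."""
    hwm, fees = navs[0], []
    for nav in navs[1:]:
        fees.append(fee_rate * max(nav - hwm, 0.0))
        hwm = max(hwm, nav)
    return fees

# NAV path: gain, large loss, partial recovery, new high.
print(hwm_fees([100, 120, 90, 110, 125]))   # [4.0, 0.0, 0.0, 1.0]
```

The manager collects $4 on the way up and gives nothing back after the loss; fees resume only above the old high-water mark of 120. Gains with no give-back is exactly the payoff of a put option on returns.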

The capital structure of publicly traded companies provides the final and possibly the most interesting example of embedded options. A classic article by Merton (1974) shows how shares of a publicly traded company whose capital structure includes both shares and bonds are equivalent to a call on the value of the company (and the risky bond includes a put option). The call option means that shareholders benefit from increased volatility in the value of the company assets (because the value of a call increases as volatility increases), to the detriment of bondholders. This effect becomes particularly important when the firm value is near the par value of the bonds and the company is thus near default. This way of thinking about share value raises the intriguing possibility that shareholders will have an incentive to take on more risk than desired by debtholders and possibly even more than company employees desire, particularly when a company is near default.
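Merton's equity-as-a-call observation can be illustrated with a plain Black-Scholes call on firm assets. The inputs below (asset value 100, debt face value 90, a five-year horizon, a 3 percent rate) are hypothetical, chosen only to show the volatility effect:

```python
from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf

def equity_as_call(assets, debt_face, vol, r, t):
    """Merton (1974): equity is a call on firm assets struck at the
    face value of the debt -- an ordinary Black-Scholes call."""
    d1 = (log(assets / debt_face) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return assets * N(d1) - debt_face * exp(-r * t) * N(d2)

for vol in (0.10, 0.20, 0.40):
    e = equity_as_call(assets=100, debt_face=90, vol=vol, r=0.03, t=5)
    print(f"asset vol {vol:.0%}: equity {e:5.1f}, risky debt {100 - e:5.1f}")
```

Raising asset volatility shifts value from bondholders to shareholders (equity rises, risky debt falls), and the effect is strongest when assets are close to the face value of the debt—that is, near default.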

In the end, careful thinking about preferences, incentives, compensation, and principal-agent problems enlightens many of the most difficult issues in risk management—issues that I think we as a profession have only begun to address in a substantive manner.

3.2 MANAGE INFRASTRUCTURE—PROCESS, TECHNOLOGY, DATA

Process and procedure, and the whole arena of operational process and controls, are critically important. These aspects of management are also vastly underappreciated. Many financial disasters—from large and world-renowned ones such as Barings Bank's collapse of 1995 to unpublicized misfortunes on individual trading desks—are the result of simple operational problems or oversights rather than complex risk management failures. To coin a phrase, processes and procedures are not rocket science; nonetheless, losses in this arena hurt as much as any others, possibly more so because they are so easy to prevent and are so obvious after the fact. From Lleo (2009):

Jorion (2007) drew the following key lesson from financial disasters: Although a single source of risk may create large losses, it is not generally enough to result in an actual disaster. For such an event to occur, several types of risks usually need to interact. Most importantly, the lack of appropriate controls appears to be a determining contributor. Although inadequate controls do not trigger the actual financial loss, they allow the organization to take more risk than necessary and also provide enough time for extreme losses to accumulate. (p. 5)

Technology and Data

Risk management and risk measurement projects are as much about boring data and information technology (IT) infrastructure as about fancy quantitative techniques; after all, if you do not know what you own, it is hard to do any sophisticated analysis. In building or implementing a risk management project, often 80 percent of the effort and investment is in data and IT infrastructure and only 20 percent in sophisticated quantitative techniques.

I cannot overemphasize the importance of data and the IT infrastructure required to store and manipulate the data for risk analytics. For market risk, and credit risk even more, good records of positions and counterparties are critical, and these data must be in a form that can be used. An interest rate swap must be stored and recognized as a swap, not forced into a futures system. The cost and effort required to build, acquire, and maintain the data and IT infrastructure should not be underestimated, but neither should they stand as a significant impediment to implementing a risk management project. Building data and IT infrastructure is, again, not rocket science, and the available IT tools have improved vastly over the years.

Often the best return per dollar invested in risk management projects will be in the basic infrastructure—data, IT, operations, daily reporting, and the people to support these activities. These are not high-profile areas of the business, but there can be big benefits to getting the basics to run smoothly. There is often a presumption on the part of front-desk traders and senior managers that basic infrastructure is taken care of and done well. In reality, back office, operations, middle office, data, and IT infrastructure are too often starved for resources relative to the sophisticated and high-profile quantitative arenas. Until, of course, a failure in the basic infrastructure contributes to a large loss, costing many years of profits.

3.3 UNDERSTAND THE BUSINESS

A cardinal rule of managing risk is that managers must understand risk. Managers must understand the risks embedded in the business, and they must understand the financial products that make up the risk. This is a simple and obvious rule but one that is often violated: Do the bank board members and CEO understand interest rate or credit default swaps? And yet these instruments make up a huge portion of the risk of many financial firms. And how often, when a firm runs afoul of some new product, has it turned out that senior managers failed to understand the risks?

Managers, both midlevel and senior, must have a basic understanding of and familiarity with the products they are responsible for. In many cases, this means improving managers' financial literacy. Many financial products (derivatives in particular) are said to be so complex that they can be understood only by rocket scientists using complex models run on supercomputers. It may be true that the detailed pricing of many derivatives requires such models and computer power, but the broad behavior of these same products can often be surprisingly simple, analyzed using simple models and hand calculators. Many in research and trading benefit from the aura and status acquired as keepers of complex models, but a concerted effort must be made to reduce complex products to simple ideas. I do not wish to imply that dumbing down is advisable but rather that improved education for managers is required, together with simple and comprehensible explanations from the experts.

Simple explanations for thinking about and understanding risk are invaluable, even indispensable. In fact, when a simple explanation for the risk of a portfolio does not exist, it can be a sign of trouble—that somewhere along the line, somebody does not understand the product or the risk well enough to explain it simply and clearly. Even worse, it may be a sign that somebody does understand the risks but does not want others to understand.


INTEREST RATE SWAPS AND CREDIT DEFAULT SWAPS: A LONG DIGRESSION2

This book is not a text on financial products or derivatives, but in this long digression I discuss two simple examples: interest rate swaps and credit default swaps. The goal is twofold. First, I want to show how the fundamental ideas can be easily presented even for products that are usually considered complex. Second, I want to show how these simple explanations have practical application in understanding what happens in financial markets.

INTEREST RATE SWAPS AND LTCM

Interest rate swaps (IRSs) are by now old and well-established financial instruments. Even so, they are often considered complex. In fact, they are very simple. For most purposes, particularly changes in interest rates, an IRS behaves like a bond. Its profit and loss (P&L) has the same sensitivity as a bond to changes in interest rates but with no (or, more precisely, much reduced) credit risk.

I assume that readers have a basic knowledge of how an interest rate swap is structured—that a swap is an agreement between two parties to exchange periodic fixed payments for floating interest rate payments for an agreed period.3 Say that we are considering a four-year swap, receiving $5 annually and paying the floating rate annually.4 The cash flows for the swap look like Panel A of Figure 3.1. One year from now, we receive $5 and pay the floating rate (which is set in the market today). In two years, we receive $5 and pay the appropriate floating rate (the rate that will be set at Year 1). On each payment date, we exchange only the net cash flow, so at Year 1 we would receive $1.50 if today's floating rate were 3.50 percent ($5.00 – $3.50).

2 Note that this section is a digression that can be read independently of the rest of the chapter.
3 See Coleman (1998b) for a complete discussion.
4 Standard swaps in U.S. dollars involve semiannual payments on the fixed side and quarterly on the floating side, but I use annual payments here just to make the diagrams easier.

Understanding how to value the swap and what the risk is (that is, how it will move as underlying markets move) is not obvious from Panel A of Figure 3.1. We can use a simple trick, however, to make the valuation and risk clear. Because only net cash flows are exchanged on each payment date, it makes no difference to net overall value if we insert +$100 and –$100 at the end. It does, however, completely alter our view of the swap. Now we can view it as being long a fixed-coupon, four-year 5 percent bond and short a floating-rate bond, as shown in Panel B. Furthermore, a floating-rate bond is always worth $100 today, so we now know that the value of the swap is just the difference between the values of two bonds:

PV(Swap to receive $5 for 4 years) = PV(4-year 5% bond) − 100

Not only do we know the value; we also know the interest rate risk: the risk of the swap will be exactly the same as the risk of the fixed-coupon bond (because a floating-coupon bond is always at par and has no interest rate risk).5

We thus have a very simple explanation of how any standard IRS will behave—like a bond of the same coupon, maturity, and notional amount. This approach may not be precise enough for trading swaps in today's competitive markets (we are ignoring details about day counts, and so on), but it is more than adequate for understanding the broad outlines of what a swap is and how a swap portfolio works.
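The bond-minus-100 view translates directly into a few lines of code. This is a minimal sketch of the calculation (annual coupons, a single flat yield, no day counts), not a trading-grade model:

```python
def bond_pv(coupon, years, y, face=100.0):
    """PV per 100 face of an annual-pay fixed-coupon bond at flat yield y."""
    return (sum(coupon / (1 + y) ** t for t in range(1, years + 1))
            + face / (1 + y) ** years)

def swap_pv(fixed_coupon, years, y):
    """Receive-fixed swap per 100 notional: long fixed bond, short par floater."""
    return bond_pv(fixed_coupon, years, y) - 100.0

# At a 5 percent yield the 4-year 5 percent swap is worth zero; as yields
# fall the receive-fixed swap gains, exactly like a bond. The 1 bp bump
# (5.00% -> 4.99%) gives the swap's DV01 per 100 notional.
for y in (0.05, 0.0499, 0.06):
    print(f"yield {y:7.2%}: swap PV {swap_pv(5.0, 4, y):+8.4f}")
```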


[Figure 3.1: Swap to Receive $5.00 Annual Fixed (Pay Floating) and Equivalence to Long Fixed Bond, Short Floating Bond. Panel A shows the swap cash flows: a fixed coupon (e.g., $5/year) received and a floating coupon (initially set today, then reset every year) paid. Panel B inserts ±$100 at maturity, giving a long fixed-coupon bond and a short floating-rate bond (worth $100 today): PV(swap rec 5%) = +PV(5% fixed-coupon bond) − 100.]

5 The exact equivalence between the swap and the net of the fixed-coupon bond less the floating bond holds only for the instant before the first floating coupon is set and ignores any differences in day counts or other technical details. Furthermore, there will be some (although small) credit risk embedded in the swap because of counterparty exposure. I ignore these issues for now because they do not matter for understanding the major component of the risk—the change in value with interest rates.


We can, in fact, use this straightforward view of swaps to help understand what happened with the fund Long-Term Capital Management6 (LTCM) in 1998. LTCM was a large hedge fund that spectacularly collapsed in September 1998 as a result of market disruptions following Russia's de facto debt default that August. At the beginning of 1998, LTCM's capital stood at $4.67 billion, but by the bailout at the end of September, roughly $4.5 billion of that had been lost. LTCM lost virtually all its capital.

The demise of LTCM is a fascinating story and has been extensively discussed, with the account of Lowenstein (2000) being particularly compelling (also see Jorion [2000] for an account). Many reasons can be given for the collapse, and I do not pretend that the complete explanation is simple, but much insight can be gained when one recognizes the size of the exposure to swaps. Lowenstein (2000, 187) recounts a visit by Federal Reserve and Treasury officials to LTCM's offices on September 20, during which officials received a run-through of LTCM's risk reports. One figure that stood out was LTCM's exposure to U.S. dollar-denominated swaps: $240 million per 15 basis point (bp) move in swap spreads (the presumed one standard deviation move).

As discussed earlier, receiving fixed on a swap is equivalent to being long a fixed-coupon bond, as regards sensitivity to moves in interest rates. The relevant interest rates are swap rates, not U.S. Treasury or corporate bond rates.7 U.S. swap rates will usually be above U.S. Treasury rates and below low-rated corporate yields, although by exactly how much will vary over time.8 The swap spread—the spread between swap rates and U.S. Treasury rates—will depend on the relative demand for U.S. Treasuries versus U.S. swaps. During a period of high risk aversion, such as during the 1998 Russia crisis, there will generally be an increase in demand for Treasuries as investors flock to a safe haven. This flight to safety will push the swap spread higher.

6 Commonly referred to by the name of the management company, Long-Term Capital Management (LTCM).
7 It may sound circular to say U.S. swaps depend on U.S. swap rates, but it is no more so than saying U.S. Treasuries depend on U.S. Treasury rates.
8 Before 2008, I would have said that swap rates are always above Treasury rates, but since November 2008, 30-year swap rates have remained consistently below Treasury rates (with spreads as wide as –40 bps). This is generally thought to be the result of disruption in the repurchase agreement market and low risk appetite among dealers, combined with high demand from corporate customers to receive fixed payments. The combination has put downward pressure on swap rates relative to Treasury rates.

Whatever the determinants of the swap spread, it is common for traders to take positions with respect to the spread. Going short the spread (to benefit when the normally positive spread narrows or moves closer to zero) means going long swaps or receiving fixed—equivalent to going long a fixed-coupon bond and then going short U.S. Treasuries:

Short swap spreads = Receive fixed on swaps (long swaps) versus short U.S. Treasuries

There will be no net exposure to the level of rates because if both Treasury and swap rates go up, the swap position loses but the Treasury position benefits. There will be exposure to the swap spread because if swap rates go down and Treasury rates go up, there will be a profit as both the swap position (like a long bond position) benefits from falling rates and the short U.S. Treasury position benefits from rising Treasury rates.

LTCM's position was such that it benefited to the tune of $240 million for each 15 bp narrowing in U.S. swap spreads, or $16 million per single bp. We can easily calculate how large a notional position in bonds this exposure corresponds to. Ten-year swap rates in September 1998 were about 5.70 percent. Thus, a $1 million notional position in 10-year bonds (equivalent to the fixed side of a 10-year swap) would have had a sensitivity of about $750 per bp.9 This analysis implies that the swap spread position was equivalent to a notional bond position of about $21.3 billion, which was a multiple of LTCM's total capital. Furthermore, the $21.3 billion represented only the U.S. dollar swap spread exposure. There was also exposure to U.K. swap spreads and to other market risk factors.
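The back-of-the-envelope numbers are easy to reproduce by bumping a bond price by 1 bp; this is my reconstruction of the arithmetic, not LTCM's actual risk model:

```python
def bond_pv(coupon, years, y, face=100.0):
    """PV per 100 face of an annual-pay fixed-coupon bond at flat yield y."""
    return (sum(coupon / (1 + y) ** t for t in range(1, years + 1))
            + face / (1 + y) ** years)

# DV01 of a 10-year par bond at 5.70 percent, scaled to $1M notional.
dv01 = (bond_pv(5.70, 10, 0.0569) - bond_pv(5.70, 10, 0.0570)) * 10_000
print(f"DV01 per $1M notional: ${dv01:,.0f}")                 # roughly $750

# $16M of P&L per bp implies the notional position:
print(f"implied notional: ${16_000_000 / dv01:,.0f} million")  # ~$21,000M

# And a 3 standard deviation (45 bp) spread move:
print(f"45 bp move: ${16 * 45:,} million")                     # $720 million
```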

We can also easily calculate that a 45 bp move in swap spreads would have generated a profit or loss of $720 million. LTCM had estimated that a one-year move of one standard deviation was 15 bps. Three standard deviations would be very unlikely for normally distributed spreads (roughly 0.1 percent probability), but financial variables tend to have fat tails—thus, the possibility of a three standard deviation move should not be ignored. Indeed, from April through the end of August, 10-year U.S. swap spreads moved by almost 50 bps. This move is not so surprising when we consider that the default by Russia triggered a minor financial panic: "The morning's New York Times (August 27, 1998) intoned, 'The market turmoil is being compared to the most painful financial disasters in memory.' . . . Everyone wanted his money back. Burned by foolish speculation in Russia, investors were rejecting risk in any guise, even reasonable risk."10 Everyone piled into the safe haven of U.S. Treasuries, pushing swap spreads higher.

9 See Coleman (1998) for a discussion of bond and swap sensitivity, or DV01.

A loss of $720 million would have been 15 percent of LTCM's beginning-year capital. We have to remember, however, that this analysis accounts only for the exposure to U.S. swap spreads. Including U.K. spreads would increase the number. Furthermore, the swap positions were so large (the U.S. position equivalent to $21.3 billion notional) that they could not be quickly liquidated, meaning that LTCM had no practical choice but to live with the losses. In the end, from January 1998 through the bailout, LTCM suffered losses of $1.6 billion because of swaps.11

This is by no means a full explanation of LTCM's collapse, but it is very instructive to realize that many of LTCM's problems resulted from large, concentrated, directional trades. The swap spread position was a directional bet on the swap spread—that the spread would narrow further from the levels earlier in the year. Instead of narrowing, swap spreads widened dramatically during August and September. LTCM simply lost out on a directional bet.

Swap spreads were one large directional bet, and long-term equity volatility was another.12 Together, swap spreads and equity volatility accounted for $2.9 billion of losses out of a total of $4.5 billion. As Lowenstein says, "It was these two trades that broke the firm" (p. 234). There is much more to understanding LTCM's demise than this simple analysis, including the role of leverage and, importantly, the decisions and human personalities that led to taking such large positions. Lowenstein (2000) and Jorion (2000) cover these in detail, and Lowenstein's book in particular is a fascinating read. Nonetheless, this example shows how a simple, broad-stroke understanding of a portfolio and its risks is invaluable.

10 Lowenstein (2000, 153–154).
11 Lowenstein (2000, 234).
12 According to Lowenstein (2000, 126), LTCM had positions equivalent to roughly $40 million per volatility point in both U.S. and European stock markets. (A volatility point is, say, a move from 20 to 21 in implied volatility. An example of an implied volatility index is the VIX index of U.S. stock market volatility.) Implied volatility for such options rose from roughly 20 percent to 35 percent (from early 1998 to September of that year), implying roughly $1.2 billion in losses. The actual losses from swaps were about $1.6 billion and from equity volatility, about $1.3 billion (Lowenstein 2000, 234).

CREDIT DEFAULT SWAPS AND AIG

The market for credit default swaps (CDSs) has grown from nothing in the mid-1990s to a huge market today. CDSs are often portrayed as complex, mysterious, even malevolent, but they are really no more complex or mysterious than a corporate bond. Indeed, a CDS behaves, in almost all respects, like a leveraged or financed floating-rate corporate bond. The equivalence between a CDS and a floating-rate bond is very useful because it means that anyone acquainted with corporate bonds—anyone who understands how and why they behave in the market as they do, how they are valued, and what their risks are—understands the most important aspects of a CDS. In essence, a CDS is no harder (and no easier) to value or understand than the underlying corporate bond.

Once again I assume that readers have a basic knowledge of credit default swaps.13 A CDS is an agreement between two parties to exchange a periodic fixed payment in return for the promise to pay any principal shortfall upon default of a specified bond. Figure 3.2 shows the CDS cash flows over time. The periodic premium payment is agreed up front, and (assuming I sell CDS protection) I receive premiums until the maturity of the CDS or default, whichever occurs first. If there is a default, I must cover the principal value of the bond: I must pay 100 less recovery (the recovery value of the bond). This payment of the principal amount is obviously risky, and because the premiums are paid to me only if there is no default, the premiums are also risky.

[Figure 3.2: Timeline of CDS Payments, Sell Protection. Risky premiums of C are received if no default; repayment of loss upon default equals 100 − Recovery.]

13 See Coleman (2009) for a complete discussion.

The details of CDSs are indeed more difficult to understand than those of many other securities, more difficult than bonds or interest rate swaps, but the equivalence between a CDS and a corporate bond mentioned earlier means that a broad view of how and why CDSs behave as they do is easy to grasp.

To see why a CDS behaves like a floating-rate bond or note (FRN), consider a CDS for which I receive the periodic fixed payments and promise to pay principal loss upon default of some bond or some company. That is, I sell CDS protection, which we will see shortly is the same as buying a financed FRN. Figure 3.2 shows the CDS cash flows: I receive premiums until the maturity or default, and I pay out the principal amount upon default.

Now we can use an elegant trick—in essence, the same as that used for the interest rate swap earlier. With any swap agreement, only net cash flows are exchanged. This means we can insert any arbitrary cash flows we wish so long as the same amount is paid and received at the same time and the net is zero. Let us add and subtract LIBOR14 payments at each premium date and also 100 at CDS maturity but only when there is no default. These LIBOR payments are thus risky. But because they net to zero, they have absolutely no impact on the price or risk of the CDS. Panel A of Figure 3.3 shows the original CDS plus these net zero cash flows. Panel B of Figure 3.3 rearranges these cash flows in a convenient manner:

• An FRN, by combining
  – The CDS premium and +LIBOR into a risky floating coupon, paid only if there is no default.
  – +100 into a risky principal repayment, paid only if there is no default.
  – Conversion of the payment of –Recovery into receiving +Recovery, paid only if there is default (note that paying a minus amount is the same as receiving a positive amount).

• A LIBOR floater, by combining
  – –LIBOR into a risky floating coupon, paid until default or maturity, whichever occurs earlier.
  – 100 paid at maturity if there is no default.
  – 100 paid at default if there is default.

14 LIBOR is the London Interbank Offered Rate, a basic, short-term interest rate.

[Figure 3.3: CDS Payments Plus Offsetting Payments Equal FRN Less LIBOR Floater. Panel A shows the CDS (sell protection) plus net zero cash flows: risky premiums of C if no default, repayment of loss upon default (100 − Recovery), plus offsetting risky LIBOR payments of L and risky principal of 100 if no default. Panel B rearranges these into an FRN (risky payments of C + L and risky principal of 100 if no default, recovery upon default) plus a LIBOR floater of indeterminate maturity (risky LIBOR payments of L, 100 at maturity if no default, 100 upon default).]

In Panel B, the FRN behaves just like a standard floating-rate bond or note: if no default occurs, then I receive a coupon (LIBOR + Spread) and final principal at maturity, and if default occurs, then I receive the coupon up to default and then recovery. The LIBOR floater in Panel B looks awkward but is actually very simple: it is always worth 100 today. It is a LIBOR floating bond with maturity equal to the date of default or maturity of the CDS: Payments are LIBOR + 100 whether there is a default or not, with the date of the 100 payment being determined by the date of default (or CDS maturity). The timing of the payments may be uncertain, but that does not affect the price, because any bond that pays LIBOR + 100, when discounted at LIBOR (as is done for CDSs), is worth 100 irrespective of maturity (that is, irrespective of when the 100 is paid).

This transformation of cash flows is extraordinarily useful because it tells us virtually everything we want to know about the broad how and why of a CDS.15 Selling CDS protection is the same as owning the bond (leveraged—that is, borrowing the initial purchase price of the bond). The CDS will respond to the credit spread of the underlying bond or underlying company in the same way as the FRN would. This view of a CDS is quite different from the usual explanation of a CDS as an insurance product—that the seller of protection insures the bond upon default. Treating a CDS as an insurance contract is technically correct but profoundly uninformative from a risk management perspective, providing virtually no insight into how and why a CDS behaves as it does. In fact, a corporate bond can be treated as embedding an implicit insurance contract.16 The insurance view of a corporate bond, like the insurance view of a CDS, is technically correct but generally uninformative from a portfolio risk management point of view, which is why corporate bonds are rarely treated as insurance products.

Having a simple and straightforward understanding of a CDS as an FRN can be very powerful for understanding the risk of portfolios and how they might behave. We can, in fact, use this approach to gain a better understanding of what brought AIG Financial Products (FP) to its knees in the subprime debacle of the late 2000s. According to press reports, in 2008, AIG FP had notional exposure to highly rated CDSs of roughly $450 billion to $500 billion, with about $60 billion exposed to subprime mortgages and the balance concentrated in exposure to banks.17 Viewing CDSs as leveraged FRNs has two immediate results. First, it reinforces how large a position $450 billion actually is. Outright purchase of $450 billion of bonds, with exposure concentrated in financial institutions and subprime mortgages, certainly would have attracted the attention of senior executives at AIG (apparently, the CDS positions did not). Even the mere recognition that the CDS position is, for all intents and purposes, $450 billion of bonds with all the attendant risks might have prompted a little more scrutiny.

15 The equivalence is not exact when we consider FRNs that actually trade in the market. The technical issue revolves around payment of accrued interest upon default (see Coleman 2009). Although it may not be good enough for trading in the markets, the equivalence is more than satisfactory for our purposes.
16 See Coleman (2009) for a discussion and also the mention by Stiglitz in Eatwell, Milgate, and Newman (1987, The New Palgrave, vol. 3, 967).

The second result is that it allows us to easily calculate the risk of $450 billion of CDSs, in terms of how much the value might change as credit spreads change. I am not saying that we can calculate AIG FP's exact exposure, but we can get an order-of-magnitude view of what it probably was. We can do this quite easily using the equivalence between CDSs and FRNs. Most CDSs are five-year maturities, and rates were about 5.5 percent in 2008. A five-year par bond (FRN) with a rate of 5.5 percent has a sensitivity to credit spreads, or credit DV01, of about $435 per basis point for $1 million notional.18 Thus, $450 billion of bonds would have sensitivity to credit spreads of very roughly $200 million per basis point. Once again, this analysis emphasizes how large the position was.

With a risk of $200 million per basis point, a widening of 10 bps in the spread would generate $2 billion of losses. A move of 50 bps would generate roughly $10 billion in losses. A 50 bp move in AAA spreads is large by pre-2008 historical standards, but not unheard of. Unfortunately, from mid-2007 through early 2008, spreads on five-year AAA financial issues rose from about 50 bps to about 150 bps. By the end of 2008, spreads had risen to roughly 400 bps; with a risk of $200 million per basis point, this change in spreads would mean losses of $70 billion.19

The exposure of $200 million is not precise, and the moves in aggregate spreads would not track exactly the spreads that AIG FP was exposed to. Nonetheless, given the size of the exposure and the moves in spreads, it is not hard to understand why AIG FP suffered large losses. AIG FP had a huge, concentrated, directional position in subprime, bank, and other bonds with exposure to the financial sector. AIG FP was betting (whether by intent or accident) that spreads would not widen and that the firm would thus earn the coupon on the CDS. The bet simply went wrong. As with LTCM, there is far more to the story than just a spread position (including, as with LTCM, leverage and the human component that led to the positions), but recognizing the large directional nature of AIG's positions makes the source of the losses easier to understand. It does not completely explain the incident, but it does shed valuable light on it.
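To see where these numbers come from, the following minimal sketch (my own illustration, not code from the book) prices a five-year par bond and bumps the yield by 1 bp; by the logic of footnote 18, this interest rate DV01 approximates the credit spread DV01 of the FRN. The semiannual coupon convention and the function name are my assumptions.

```python
# Minimal sketch (not from the book): credit DV01 of a five-year par bond,
# scaled to the AIG FP position. Semiannual coupons are an assumption.

def par_bond_dv01(coupon_rate, years, notional=1_000_000, freq=2):
    """PV drop for a 1 bp rise in yield on a par bond (yield = coupon)."""
    c = coupon_rate / freq * notional            # periodic coupon payment
    n = years * freq                             # number of coupon periods

    def pv(y):
        df = 1.0 + y / freq                      # per-period discount base
        return sum(c / df**t for t in range(1, n + 1)) + notional / df**n

    return pv(coupon_rate) - pv(coupon_rate + 0.0001)

dv01 = par_bond_dv01(0.055, 5)                   # ~$432 per bp per $1M (text: about $435)
print(f"credit DV01 per $1M notional: ${dv01:,.0f}")
print(f"$450B position, per bp:       ${dv01 * 450_000:,.0f}")          # ~$195M per bp
print(f"loss for a 350 bp widening:   ${dv01 * 450_000 * 350 / 1e9:,.1f}B")  # ~$68B
```

Scaling the roughly $430 per basis point per $1 million notional up to $450 billion gives about $195 million per basis point, consistent with the rough $200 million figure used in the text.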


17 The Economist ("AIG's Rescue: Size Matters" 2008) reported June 2008 notional exposure of $441 billion, of which $58 billion was exposed to subprime securities and $307 billion exposed to "instruments owned by banks in America and Europe and designed to guarantee the banks' asset quality." Bloomberg (Holm and Popper 2009) reported that AIG FP "provided guarantees on more than $500 billion of assets at the end of 2007, including $61.4 billion in securities tied to subprime mortgages." The Financial Times (Felsted and Guerrera 2008) reported that "based on mid-2007 figures, AIG had $465 billion in super-senior credit default swaps."
18 The interest rate risk of an FRN is close to zero because coupons change with the level of rates. The credit spread risk of an FRN will be roughly the same as the spread risk of a fixed-rate bond (technically, a fixed-rate bond with coupons fixed at the forward floating rate resets). For a fixed-rate bond, the spread risk and the interest rate risk will be close to the same. In other words, to find the credit spread risk of an FRN, we simply need to calculate the interest rate risk of a fixed-coupon bond with its coupon roughly equal to the average floating coupon, which will be the fixed coupon of a par bond with the same maturity.
19 Spreads went back down to roughly 250 bps by early 2010 (figures from Bloomberg). Not all of AIG's positions would have been five years, nor would they all have been financials, but this analysis gives an order-of-magnitude estimate for the kinds of spread movements seen during this period.

Managing Risk 83

Page 103: Quantitative Risk Management - Donutsdocshare01.docshare.tips/files/31781/317812447.pdfRisk Management versus Risk Measurement 3 CHAPTER 2 Risk, Uncertainty, Probability, and Luck

C03 02/15/2012 9:52:37 Page 84

3.4 ORGANIZATIONAL STRUCTURE

It is critically important to address the question of what role and organizational structure are best for risk management and risk measurement. This question is closely tied to corporate governance (and regulatory) issues. I review these issues but do not delve into them in detail. The topic is important and should not be glossed over, but it is outside my particular expertise. Furthermore, there is a substantial literature on corporate governance that readers can access.



Two references are particularly valuable. Crouhy, Galai, and Mark (2001, ch. 3) cover a broad range of issues concerned with risk management in a bank. They start with the importance of defining best practices, in terms of policies, measurement methodologies, and supporting data and infrastructure. They also discuss defining risk management roles and responsibilities, limits, and limit monitoring. Crouhy, Galai, and Mark (2006, ch. 4) focus more on the corporate governance aspect and on defining and devolving authority from the board of directors down through the organization.

I discuss the issues of organizational structure and corporate governance from the perspective of a large publicly traded firm, owned by shareholders whose interests are represented by a board of directors. I assume that the firm has a senior management committee responsible for major strategic decisions. Most or all the discussion that follows could also be translated in an obvious manner to a smaller or privately held firm—for example, by substituting the owner for the board or the CEO for the senior management committee.

I start with the role of the board of directors and senior management, following Crouhy, Galai, and Mark (2006, ch. 4). Starting with the board and senior management has to be correct if we truly believe that managing risk is a central function of a financial firm. Crouhy, Galai, and Mark (2006) specify the role of the board as understanding and ratifying the business strategy and then overseeing management, holding management accountable. The board is not there to manage the business but rather to clearly define the goals of the business and then hold management accountable for reaching those goals. Although this view runs contrary to the view of a director at a large financial group who claimed that "A board can't be a risk manager" (Guerrera and Larsen 2008), in fact the board must manage risk in the same way it manages profits, audit, or any other aspect of the business—not operational management but understanding, oversight, and strategic governance.

For practical execution of the strategic and oversight roles, a board will often delegate specific responsibility to committees. I will consider as an example an archetypal financial firm with two committees of particular importance for risk—the risk management committee and the audit committee. Not all firms will have both, but the roles and responsibilities described must be met in one form or another.

The risk management committee will have responsibility for ratifying risk policies and procedures and for monitoring the effective implementation of these policies and procedures. As Crouhy, Galai, and Mark (2006) state, the committee "is responsible for independently reviewing the identification, measurement, monitoring, and controlling of credit, market, and liquidity risks, including the adequacy of policy guidelines and systems" (p. 94). One area where I diverge from Crouhy, Galai, and Mark slightly (by degree, not qualitatively) is in the level of delegation or devolution of responsibility. I believe that risk is so central to managing a financial firm that the board should retain primary responsibility for risk. The risk committee is invaluable as a forum for developing expertise and advice, but the board itself should take full responsibility for key strategic risk decisions.

An inherent contradiction exists, however, between the board's responsibility to carry out oversight and strategic governance, on the one hand, and to select truly independent nonexecutive directors, on the other. Critical understanding and insight into the complex risks encountered by financial firms will generally be acquired through experience in the financial industry. Nonexecutive directors from outside the industry will often lack the critical skills and experience to properly hold managers and executives accountable—that is, to ask the right questions and understand the answers. Crouhy, Galai, and Mark (2006, 92) propose an interesting solution, establishing a "risk advisory director." This person would be a member of the board (not necessarily a voting member) specializing in risk. The role would be to support board members in risk committee and audit committee meetings, both informing board members with respect to best-practice risk management policies, procedures, and methodologies and also providing an educational perspective on the risks embedded in the firm's business.

Most large financial firms have an audit committee that is responsible for ensuring the accuracy of the firm's financial and regulatory reporting and also compliance with legal, regulatory, and other key standards. The audit committee has an important role in "providing independent verification for the board on whether the bank is actually doing what it says it is doing" (Crouhy, Galai, and Mark 2006, 91). There is a subtle difference between this role and the role of the risk management committee. The audit committee is rightly concerned with risk processes and procedures. The audit committee focuses more on the quality and integrity of the processes and systems, the risk committee more on the substance.

Crouhy, Galai, and Mark (2006, 95) rightly place responsibility for developing and approving business plans that implement the firm's strategic goals with the firm's senior management. Risk decisions will usually be delegated to the senior risk committee of the firm. Because risk taking is so inextricably linked with profit opportunities, the risk committee must include the firm's CEO and senior heads of business units, in addition to the chief risk officer (CRO), chief financial officer, treasurer, and head of compliance.

Regarding the organizational structure within the firm itself, the standard view is laid out most clearly in Crouhy, Galai, and Mark (2006). A CRO and "risk management group" are established, independent of the business or trading units. The senior risk committee delegates to the CRO responsibility for risk policies, methodologies, and infrastructure. The CRO is "responsible for independent monitoring of limits [and] may order positions reduced for market, credit, or operational concerns" (p. 97).


I have a different view, one that is somewhat at variance with accepted wisdom in the risk management industry. I do believe there must be an independent risk monitoring and risk measuring unit, but I also believe that ultimate authority for risk decisions must remain with the managers making trading decisions. Risk is a core component of trading and portfolio management that cannot be dissociated from managing profits, so the management of risk must remain with the managers of the business units. It must ultimately reside with the CEO and senior management committee and devolve down through the chain of management to individual trading units.

Decisions about cutting positions are rightly the responsibility of those managers with the authority to make trading decisions. To my mind, there is a fundamental conflict in asking a CRO to be responsible for cutting positions without giving that CRO the ultimate authority to make trading decisions. The CRO either has the authority to take real trading decisions, in which case he or she is not independent, or the CRO is independent of trading, in which case he or she cannot have real authority.

This view is at variance with the accepted wisdom that proposes a CRO who is independent and who also has the authority to make trading decisions. I believe that the accepted wisdom embeds an inherent contradiction between independence and authority. I also believe that the accepted wisdom can perilously shift responsibility from managers and may lull managers into a false sense that risk is not their concern because it is being managed elsewhere in the organization.

Nonetheless, independence of risk monitoring and risk measurement is critical. Firms already have a paradigm for this approach in the role that audit and finance units play in measuring and monitoring profits. Nobody would suggest that traders or portfolio managers be responsible for producing the P&L statements of the firm. These are produced by an independent finance unit and subject to careful auditing. Areas throughout the organization rely on this P&L and recognize the importance of having verifiable, independent numbers. Risk should be thought of in the same way—information crucial to the organization that must be independently produced and verifiable.

My view of the organizational structure of a risk group is summarized in Figure 3.4. The center of the figure, the core down the middle, shows the primary responsibility for managing risk.20 Managing P&L and other aspects of the organization devolves from the board of directors to senior management (the CEO and senior management committee) and eventually down to individual trading units and business lines. The remaining key items are as follows:

& Finance unit: Develops valuation policy, ensures integrity of P&L, advises board and senior management on P&L and accounting issues.

& Risk unit: Develops risk policies, develops risk reports, ensures integrity of risk reports, advises board and senior management on risk issues.

& Operations/middle office: Books and settles trades, prepares P&L and risk reports, and delivers P&L and risk reports throughout the organization.

[FIGURE 3.4 Functions and Responsibilities for Risk and P&L. The figure shows responsibility devolving down the center from the board through senior management to the trading room, with the risk and finance units in supporting roles. Board: defines and ratifies business strategy (including risk appetite); ratifies key policies and procedures; ensures appropriate policies, procedures, and infrastructure are in place to support business goals (including risk monitoring and reporting); tools and mechanisms include the audit committee (responsible for financial and regulatory reporting, possibly also risk reporting) and the risk committee (risk reporting may be allocated to this committee instead of audit). Senior management: develops business plans and targets (P&L, growth, risk, etc.) that implement the firm's business strategy; approves business plans and targets (including P&L risk tolerances) for individual business lines and trading units; establishes policy; ensures performance; monitors compliance with risk guidelines; manages risk and valuation committees. Trading room and business line management: manages trading or other business that generates P&L and risk exposure; ensures timely, accurate, and complete deal capture or other records of business activity; signs off on official P&L. Operations/middle office: books and settles trades; reconciles positions between front and back office as well as between firm and counterparties; prepares and decomposes daily P&L; prepares daily or other-frequency risk reports; provides independent mark to market. Risk unit: specs risk reports jointly with the trading desk and monitors compliance with limits; develops detailed risk policies and guidelines that implement risk tolerances defined by the board and senior management; specs and develops detailed risk reports; ensures integrity of risk reporting; supports all levels of the firm in understanding and analyzing risk; provides the board and senior management with an independent view on risk; supports the risk committee process; together with finance, evaluates and checks models, systems, and spreadsheets. Finance unit: specs and develops P&L reports; develops valuation and finance policy; ensures integrity of P&L; advises the board and senior management on P&L and accounting issues; supports all levels of the firm in understanding and analyzing P&L, accounting, audit, and other finance issues; supports the business planning process.]

20 This organizational layout differs from, for example, Crouhy, Galai, and Mark (2006, fig. 4.2) in emphasizing the central role for the board and senior management in monitoring and enforcing risk guidelines, with the risk unit playing a supporting role in ensuring integrity of risk reporting, developing risk policy, advising, and so on.

This structure gives primary responsibility for managing risk to the managers who have the authority and responsibility to make decisions. At the same time, it emphasizes the role of the risk unit in designing risk policies and advising all levels of the organization on risk matters, from the board down through individual business units. The responsibility for actually running reports, both P&L and risk reports, is given to the operations/middle office group. Risk and P&L reporting are so closely linked that it makes sense to have one operational group responsible for both, instead of finance producing one set (P&L) and risk producing another (risk).

The board and senior managers should rely on the risk unit for advice and direction, but the board and senior management must take responsibility for being informed and educated about risk. It is also important to understand that the risk unit's role of advising the board and senior management includes the responsibility to alert the board and senior management when there are problems with respect to risk, just as the finance unit would with respect to profits.

One final issue to discuss is the use and implementation of limits. There can be a wide variety of limits. For market risk, limits may consist of restrictions or specification of the authorized business and allowed securities to be traded; VaR limits within individual business units and overall for a portfolio or firm; restrictions on types of positions and maximum size of positions; concentration limits that stop traders from putting all their risk in one instrument or one market; stop-loss limits that act as a safety valve and early warning system when losses start to mount; and inventory age limits that ensure examination of illiquid positions or those with unrecognized losses. For credit risk, limits may involve the allowed number of defaults before a business or portfolio requires special attention or controls on the allowed downward migration of credit quality within a loan or other portfolio. For the overall business, there may be limits on the liquidity exposure taken on by the firm.
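As a concrete illustration of how such limits might be monitored, here is a minimal sketch (the limit names and thresholds are hypothetical, not from the text) that compares a desk's current measures against its assigned limits and flags breaches for escalation.

```python
# Minimal limit-monitoring sketch (hypothetical names and thresholds).
limits  = {"var": 10.0, "position_size": 50.0, "stop_loss": -5.0}   # $M
current = {"var":  8.2, "position_size": 62.0, "stop_loss": -1.3}   # $M

def breaches(current, limits):
    """Return measures outside their limits; stop-loss breaches downward."""
    out = {}
    for name, lim in limits.items():
        broke = current[name] < lim if name == "stop_loss" else current[name] > lim
        if broke:
            out[name] = (current[name], lim)
    return out

print(breaches(current, limits))   # {'position_size': (62.0, 50.0)} -> escalate
```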


Limits are an important way of tying the firm's risk appetite, articulated at the board and senior management level, to strategies and behavior at the trading unit or business unit level. Limits are important at the business planning stage because they force managers to think carefully about the scale and scope of a new business in terms of the level of limits and the risk areas across which limits must be granted. Limits are important for ongoing businesses for two reasons. First, they tie the business activity back to the firm's overall risk appetite and to the decision of how to distribute the risk across business lines. Second, limits force managers to compare periodically (say, daily, weekly, or monthly) the risk actually taken in the business with what was intended.

Crouhy, Galai, and Mark (2006) have a discussion of limits, and Marrison (2002, ch. 11) has a particularly clear discussion of the different types of limits and principles for setting limits.

3.5 BRIEF OVERVIEW OF REGULATORY ISSUES

Regulation is important not only because firms must operate within the rules set by regulators but also because banking regulation has been a major driver of innovation and adoption of risk management procedures at many institutions. Two problems, however, make it difficult to provide a complete treatment here. First, it is outside my particular expertise. Second, and more important, the topic is changing rapidly and dramatically; anything written here will be quickly out of date. The response to the global financial crisis of 2008–2009 has already changed the regulatory landscape and will continue to do so for many years to come. I will provide only some background, with references for further exploration.

Many texts cover bank regulation, and although these treatments are not current, they do provide background on the conceptual foundations and history of banking regulation. Crouhy, Galai, and Mark (2006) discuss banking regulation and the Basel Accords in Chapter 3 and mid-2000s legislative requirements in the United States regarding corporate governance (the Sarbanes-Oxley Act of 2002) in Chapter 4. Marrison (2002, ch. 23) also covers banking regulations.

Globally, the Basel Committee on Banking Supervision (BCBS) is the primary multilateral regulatory forum for commercial banking. The committee was established in 1974 by the central bank governors of the Group of Ten (G-10) countries. Although the committee itself does not possess formal supervisory authority, it is composed of representatives from central banks and national banking regulators (such as the Bank of England and the Federal Reserve Board) from 28 countries (as of 2010). The BCBS is often referred to as the "BIS Committee" because the committee meets under the auspices and in the offices of the Bank for International Settlements in Basel, Switzerland. Technically, the BIS and the Basel Committee are separate. The original 1988 BCBS accord, history on the committee, valuable research, and current information can be found at the BIS website.21

The most important regulatory requirement for banks is in regard to capital holdings. Regulatory capital is money that is available for covering unanticipated losses. It acts as a buffer or safety net when losses occur, either because assets fall below the level of liabilities or because assets cannot be liquidated quickly. In the 1980s, global regulatory developments accelerated because of concern about the level and quality of capital held by banks in different jurisdictions, with a particular focus on the low level of available capital held by Japanese banks relative to their lending portfolios. The low capital of Japanese banks was believed to give them an unfair competitive advantage.

Although capital is the most important regulatory requirement, two difficulties arise in defining regulatory capital. The first is deciding what level of capital is sufficient. The second is defining what actually counts as capital. Regarding the appropriate level of capital, the problem is determining how much a bank might lose in adverse circumstances, which, in turn, depends on determining the type and amount of assets a bank holds. Neither of these problems is easy to solve, and the issue is compounded by the necessity to have a set of standards that are relatively straightforward and that can be applied equitably across many jurisdictions using standardized accounting measures that are available in all countries.

Early global standards regarding assets were simple. Bank assets were put into broad risk categories, providing guidance as to the amount of capital that had to be reserved against the possibility that the asset would be impaired. Some assets were counted at 100 percent of face value (for example, a loan to a private company, which was considered to be at risk for the whole of the loan amount), and others were given a lower risk weighting (for example, zero percent for cash because cash has no credit risk and is immediately available or 50 percent for housing mortgages). All assets were added up (taking the appropriate risk weighting into account), and these were the bank's total risk-weighted assets. Banks were then required to hold capital equal to a percentage of the risk-weighted assets.
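The mechanics can be made concrete with a small sketch (an illustration of my own, assuming the risk weights just described; the 8 percent minimum capital ratio is the historical Basel I figure, stated here as background rather than taken from the text).

```python
# Illustrative Basel I-style risk-weighted assets, using the weights
# described in the text: 0% cash, 50% residential mortgages, 100%
# corporate loans. The 8% ratio is the historical Basel I minimum
# (an assumption for illustration, not a current requirement).
risk_weights = {"cash": 0.00, "residential_mortgages": 0.50, "corporate_loans": 1.00}
holdings     = {"cash": 200,  "residential_mortgages": 300,  "corporate_loans": 500}  # $M face

rwa = sum(face * risk_weights[asset] for asset, face in holdings.items())
print(f"risk-weighted assets:  ${rwa:,.0f}M")         # 0 + 150 + 500 = $650M
print(f"capital required (8%): ${0.08 * rwa:,.0f}M")  # $52M
```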

Defining the capital is where the second difficulty arises because defining exactly what counts as capital, and how good that capital is, can be hard. It is widely accepted that equity and reserves are the highest quality form of capital. Equity and reserves—investment in the business provided by outside investors or retained earnings that will disappear in the case of losses—clearly provide a buffer against losses. Other sources of capital—say, undeclared profits—may not be available to cover losses in the same manner and thus may not provide as good a buffer.

21 See www.bis.org/bcbs.

Much of the development of global regulation since the 1980s has focused on these three aspects: first, which assets contribute how much to risk-weighted assets; second, what is the appropriate capital ratio; and, third, what counts as capital.

Originally, only the credit risk of assets was taken into account, with no inclusion of market risk (price risk from sources other than default, such as the overall movement of interest rates). New standards published in 1996 and implemented in 1998 sought to include market risk. The rules for risk weighting of assets, however, were still quite crude. The so-called Basel II rules published in 2004 sought to update capital adequacy standards by providing more flexibility but also more precision in the ways that the total risk of assets and total capital are calculated. The details are less important than recognizing that there has been a process for trying to improve how capital requirements are calculated.

The global financial crisis of 2008–2009 highlighted deficiencies in the global regulatory framework, and regulators have responded with Basel III. The process started with a broad framework published in September 2009 and has continued through 2011. Focus has expanded beyond bank-level regulation (setting bank-level capital requirements, for example) to managing system-wide risks, so-called macroprudential regulation.

3.6 MANAGING THE UNANTICIPATED

The ultimate goal for risk management is to build a robust yet flexible organization and set of processes. We need to recognize that quantitative risk measurement tools often fail to capture just those unanticipated events that pose the most risk to an organization. The art of risk management is in building a culture and organization that can respond to and withstand these unanticipated events.

Managing risk for crises, tail events, or disasters requires combining all types of risk—market risk, credit risk, operational risk, liquidity risk, and others. Generally, crises or disasters result from the confluence of multiple events and causes. Examples are the collapse of Barings in 1995 (and also the same firm's collapse in 1890) and the Société Générale trading loss in January 2008.

Risk management is about managing all types of risk together—building a flexible and robust process and organization. The organization must have the flexibility to identify and respond to risks that were not important or recognized in the past and the robustness to withstand unforeseen circumstances. Importantly, it also must incorporate the ability to capitalize on new opportunities.

Examining risk and risk management in other arenas can provide useful insights and comparisons: insight into the difference between measuring and managing risk and comparison with methods for managing risk. Consider the risks in ski mountaineering or backcountry skiing, of which there are many. There is the risk of injury in the wilderness as well as the risk of encountering a crevasse, icefall, or rockfall—as with any mountaineering—but one of the primary risks is exposure to avalanches. Avalanches are catastrophic events that are virtually impossible to forecast with precision or detail.

Ski mountaineering risks and rewards have many parallels with financial risks and rewards. Participating in the financial markets can be rewarding and lucrative; ski mountaineering can be highly enjoyable, combining the challenge of climbing big mountains with the thrill of downhill skiing—all in a beautiful wilderness environment. Financial markets are difficult to predict, and it can be all too easy to take on exposure that suddenly turns bad and leads to ruinous losses; avalanches are also hard to predict, and it is all too easy to stray onto avalanche terrain and trigger a deadly slide.

Managing avalanche risk has a few basic components, and these components have close parallels in managing financial risk:

Learning about avalanches in general—When and how do they occur?22 The analogy in the financial world would be gaining expertise in a new financial market, product, or activity before jumping in.

Learning about specific conditions on a particular day and basing decisions on this information—First, is today a high or low avalanche risk day? Then, using this information combined with one's own or the group's risk tolerance, one must decide whether to go out. In financial risk management, this component would be analogous to learning the specific exposures in the portfolio and then deciding whether to continue, expand, or contract the activity.

Creating damage control strategies—What processes and procedures will mitigate the consequences of disaster when and if it strikes? For example, backcountry skiers should go in a group with every member carrying the tools for group self-rescue—a beacon, probe, and shovel. An avalanche beacon is a small radio transceiver that can be used by group members who are not buried to locate a buried companion, and the probe and shovel are necessary to dig the companion out. A beacon reduces the consequences of being caught and buried by an avalanche: Having a beacon gives a reasonable chance, maybe 50 to 80 percent, of being recovered alive; without a beacon, the chance is effectively zero. Also, safe travel rituals can minimize the effect of an avalanche if it does occur. These damage control strategies are the final component of managing avalanche risk. For financial risk management, this component is analogous to building a robust and flexible organization that can effectively respond to unexpected shocks.

22 A common problem for beginner backcountry skiers is ignorance of the risks they are taking. One day, there may be little risk from avalanche and another day, great exposure, but in neither case does the beginner even know that she is exposed.

The comparison with backcountry travel in avalanche terrain highlights some important issues that carry over to financial risk management. First is the importance of knowledge and attention to quantitative measurement. Veteran backcountry skiers spend time and effort learning about general and specific conditions and pay considerable attention to quantitative details on weather, snowpack, and so forth. (Those who do not take the time to do so tend not to grow into veterans.) Managers in the financial industry should also spend time and effort to learn quantitative techniques and then use the information acquired with those tools.

Second is the importance of using the knowledge to make specific decisions, combining quantitative knowledge with experience, judgment, and people skills. In almost all avalanche accidents, the avalanche is triggered by the victim or a member of her party. Avalanche accidents usually result from explicit or implicit decisions made by skiers. Decision making requires skill and judgment and the management of one's own and others' emotions and behavior. Group dynamics are one of the most important issues in backcountry decision making. The same is true in managing financial risk. Quantitative measurement is valuable but must be put to good use in making informed decisions. Financial accidents generally do not simply occur but result from implicit or explicit decisions made by managers. Managers must combine the quantitative information and knowledge with experience, judgment, and people skills.

Third, both avalanches and financial accidents or crises are tail events—that is, they happen rarely and the exact timing, size, and location cannot be predicted with any degree of certainty. Nonetheless, the conditions that produce events and the distribution of events are amenable to study. One can say with some confidence that certain situations are more likely to generate an event than others. (A 38-degree slope the day after a two-foot snowfall is likely to avalanche, and for financial events, a firm with $100 million of S&P 500 exposure is more likely to have severe losses than a firm with $10 million of less-risky 10-year bonds.)

Finally, there is an apparent paradox that appears in dealing with both avalanches and financial accidents: With better measurement and management of risk, objective exposure may actually increase. As skiers acquire more skill and tools to manage avalanche risk, they often take on more objective exposure. The analogy in the financial arena is that a firm that is better able to measure and manage the risks it faces may take on greater objective exposure, undertaking trades and activities that it would shy away from undertaking in the absence of such tools and skills.

Upon further consideration, however, this is not paradoxical at all. A skier without knowledge or damage control strategies should take little objective exposure; he should go out only on low-risk days and then only on moderate slopes. Doing so is safe but not very much fun because steep slopes in fresh powder are the most exciting. With knowledge and damage control strategies, a skier will take more objective exposure—go out more often, in higher risk conditions, and on steeper slopes. Going out in higher risk conditions and on steeper slopes means taking on more objective danger, but with proper knowledge, experience, recovery tools, and decision making, the skier can reduce the risk of getting caught in an avalanche or other adverse situations and also reduce the consequences if he does get caught. Most important, the steeper slopes and better snow conditions mean better skiing and a big increase in utility, and with proper management of the risks, it can be accomplished without a disproportionate increase in adverse consequences.

Similarly, a financial firm that can better measure, control, and respond to risks may be able to undertake activities that have both greater profit potential and greater objective exposure without facing a disproportionate increase in the probability of losses.

Investment management always trades off risk and return. Managing risk is not minimizing risk but rather managing the trade-off between risk and return. Good risk management allows the following possibilities:

& Same return with lower risk
& Higher return with same risk

Generally, the result will be some of both—higher return and lower risk. But in some situations, the objective exposure increases. For a financial firm, internal management of exposures might be improved in such a way that larger positions could be taken on with the same probability of loss (more exposure leading to the same risk). This might come about, say, by a more timely reporting of positions and exposures so that better information on portfolio exposures is made available, allowing better management of portfolio diversification. The result would be a decrease in risk in the sense of the likelihood of loss or the impact of losses on the firm but an increase in risk in the sense of larger individual positions and larger profit potential.

This increase in exposure with increased risk management sophistication should not really be surprising. It is simply part of the realization that managing risk goes hand in hand with managing profits and returns. Risk management is not about minimizing risk but, rather, about optimizing the trade-off between risk and return.

Avalanches and financial accidents differ, however, in two important respects. First is the frequency of events. Avalanches occur frequently—many, many times during a season—so that veteran backcountry travelers (those who know enough and wish to survive) are constantly reminded that avalanches do occur. In contrast, severe financial events are spaced years apart; individual and collective memory thus fades, leading to complacency and denial.

Second is the asymmetry of payoffs. The penalty for a mistake in avalanche terrain is injury or death; the penalty in financial markets is losing one's job. The reward on the upside in financial markets can be quite high, so the asymmetry—substantial reward and modest penalty—creates incentive problems.

Maybe the most important lesson to learn from comparing financial risk with avalanche risk is the importance of the "human factor": the confluence of emotion, group dynamics, difficult decision making under uncertainty, and other factors that we humans are always subject to. The final and most important chapter in the popular avalanche text Staying Alive in Avalanche Terrain (Tremper 2008) is simply titled "The Human Factor." In investigating accident after accident, avalanche professionals have found that human decision making was critical: victims either did not notice vital clues or, as is often the case, ignored important flags.

Tremper explains:

There are two kinds of avalanche accidents. First, an estimated two-thirds of fatalities are caused by simple ignorance, and through education, ignorance is relatively easy to cure. The second kind of accident is the subject of this chapter—when the victim(s) knew about the hazard but proceeded anyway. They either simply didn't notice the problem, or more commonly, they overestimated their ability to deal with it. . . . Smart people regularly do stupid things. (p. 279)

Exactly the same holds for financial accidents and disasters. Ignorance is relatively easy to cure. The goal of quantitative risk measurement, and the subject of the balance of this book, is to educate and inform: to cure ignorance. Ignorance may be caused by a lack of understanding and education, and it is also caused by a lack of information and data—the inability to measure what is happening in a firm. Risk measurement is aimed at addressing these problems. As such, risk measurement has huge benefits. The fact that two-thirds of avalanche fatalities are the result of ignorance probably carries over to the financial arena: Many financial accidents (as we will see in Chapter 4) result from simple mistakes, lack of knowledge, misinformation, or lack of data—in short, financial ignorance that can be cured.

But, as with avalanches, there is a second kind of financial accident—those that are the result of the human factor. Making decisions under uncertainty is hard. Thinking about uncertainty is difficult. Group dynamics, ego, and outside pressures all conspire to cloud our judgment. To paraphrase Tremper, we should be able to practice evidence-based decision making and critically analyze the facts. We should arrive at the right decision automatically if we just have enough information. In reality, it often works out otherwise. Information, education, data—alone these are not sufficient, which brings us back to risk management. Risk management is managing people, managing process, managing data. It is also about managing ourselves—managing our ego, our arrogance, our stubbornness, our mistakes. It is not about fancy quantitative techniques but about making good decisions in the face of uncertainty, scanty information, and competing demands.

Tremper's chapter on the human factor has interesting ideas, many taken from other areas that deal with risky decision making. One point is the importance of regular and accurate feedback, which is relatively easy for avalanches because avalanches occur regularly and publicly. It is more difficult for financial disasters because they occur less frequently and less publicly. Nonetheless, feedback is important and reminds us that things can and do go wrong. Examples of financial disasters can help us be a little more humble in the face of events we cannot control.

A second area Tremper focuses on is the mental shortcuts or heuristics that we often use in making decisions and how these can lead us astray. This point is related to the issue of heuristics and cognitive biases in probabilistic thinking discussed in Chapter 2 of this text. The heuristics discussed in Chapter 2 are related more particularly to the assessment of probabilities, whereas the heuristics here can better be thought of as decision-making shortcuts that often lead us toward errors.

The most important of these heuristics, which carry over naturally to financial risk taking, are as follows:

& Familiarity: We feel more comfortable with what is familiar, which can bias our decision making even in the face of objective evidence. This tendency is particularly a problem when disasters occur infrequently because we can become lulled into thinking that because nothing bad has happened yet, it is unlikely that it will. Tremper points out that snow is stable about 95 percent of the time. If we ski a particular slope regularly, it will feel familiar, but we probably have not seen it when it is cranky. The slope will feel familiar, we will feel that we know it well, but that does not make it any less dangerous.

& Commitment: When we are committed to a goal, it is hard to change in the presence of new evidence; indeed, it is sometimes even hard to recognize that there is new evidence. Success in finance requires dedication and perseverance, commitment to goals, and optimism. But commitment can also blind us to changing circumstances. The balance between persevering to achieve existing goals and responding to changing circumstances is difficult.

& Social proof or the herding instinct: We look to others for clues to appropriate behavior and tend to follow a crowd. This phenomenon has two components. The first is related to the problem of familiarity just discussed. We often look to the experience of others to judge the safety and profitability of unknown activities. When others are doing something and not suffering untoward consequences, we gain confidence that it is safe, sometimes even against our better judgment. Isaac Newton offers a famous example: He invested relatively early in the South Sea Bubble but sold out (on April 20, 1720, at a profit), stating that he "can calculate the motions of the heavenly bodies, but not the madness of people." Unfortunately, he was subsequently caught in the mania during the summer and lost far more than his original profit.23

& Belief and belief inertia: We often miss evidence that is contrary to our beliefs, and our beliefs change slowly in response to new evidence. This point is best summed up by a quote from Josh Billings: "It ain't so much the things we don't know that get us into trouble. It's the things we know that just ain't so."

Unfortunately, decision making is hard. It is hard whether the decisions involve avalanches, medical diagnoses, or risk management in a financial firm. There is no way to avoid this problem. Facts, education, and careful thinking are all necessary for good decision making, but unfortunately, they are not sufficient.

23 See Kindleberger (1989, 38).


3.7 CONCLUSION

Quantitative risk measurement as discussed in this book must take its place as a component of standard business practice—a day-in-day-out activity rather than esoteric and left to a coterie of rocket scientists. Risk management must be the responsibility of anyone who contributes to the profit of the firm. Risk tools, good processes, infrastructure, all of these add to prudent business management. In this sense, quantitative risk measurement should be treated just like accounting or market research—an activity and set of tools integral to managing the business.

We need to recognize that managing risk, like managing any aspect of business, is hard. There are no easy answers. Nonetheless I will share one last thought. The task of managing risk is made easier by having a well-planned strategy. A good risk management strategy is simple to state:

& Learn about the risks in general; learn about the business and the people
& Learn about specific exposures and risks; learn about the details of the portfolio
& Manage people, process, organization; focus on group dynamics, the human factor
& Implement damage control strategies to minimize the impact when and if disaster strikes

The problem, of course, is that this strategy may be easy to state but it is fiendishly difficult to implement.


CHAPTER 4
Financial Risk Events

Stories of financial disasters hold a certain unseemly interest, even providing an element of schadenfreude for those in the financial markets. Nonetheless, there are real and substantive benefits to telling and hearing stories of financial disaster. First is the value of regular feedback on the size, impact, and frequency of financial incidents. This feedback helps to remind us that things can go badly; importantly, it can remind us during good times, when we tend to forget past disasters and think that nothing bad can possibly happen. This effect helps protect against what Andrew Haldane, head of financial stability at the Bank of England, has described as "disaster myopia": the tendency for the memory of disasters to fade with time.1 It is the "regular accurate feedback" that Tremper recommends as necessary for good avalanche decision making. It also serves "pour encourager les autres"—to encourage those who have not suffered disaster to behave responsibly.2

The second benefit is very practical: learning how and why disasters occur. We learn through mistakes, but mistakes are costly. In finance, a mistake can lead to losing a job or bankruptcy; in avalanches and climbing, a mistake can lead to injury or death. As Mary Yates, the widow of a professional avalanche forecaster, said, "We are imperfect beings. No matter what you know or how you operate 95 percent of your life, you're not a perfect person. Sometimes these imperfections have big consequences."3

1 See Valencia (2010).
2 The full phrase from Voltaire's Candide is "Dans ce pays-ci, il est bon de tuer de temps en temps un amiral pour encourager les autres." ("In this country [England], it is wise to kill an admiral from time to time to encourage the others.") The original reference was to the execution of Admiral John Byng in 1757. It is used nowadays to refer to punishment or execution whose primary purpose is to set an example, without close regard to actual culpability.
3 From Tremper (2008, 279). Mary Yates's husband, along with three others, was killed in an avalanche they triggered in the La Sal Mountains of southern Utah.


Learning from mistakes can help you identify when and how to make better decisions, and studying others' mistakes can reduce the cost of learning. I think this is an important reason why avalanche accident reports are one of the most popular sections of avalanche websites and why the American Alpine Club's annual Accidents in North American Mountaineering is perennially popular. Yes, there is a voyeuristic appeal, but reviewing others' mistakes imparts invaluable lessons on what to do and what not to do at far lower cost than making the mistakes oneself.

4.1 SYSTEMIC VERSUS IDIOSYNCRATIC RISK

As discussed in Chapter 1, an important distinction exists between idiosyncratic risk and systemic risk. Idiosyncratic risk arises from within a firm and is generally under the control of the firm and its managers. Systemic risk is shared across firms and is often the result of misplaced government intervention, inappropriate economic policies, or misaligned macroeconomic incentives.

The distinction between idiosyncratic and systemic risks is important because in the aftermath of a systemic crisis, such as that of 2007–2009, they often become conflated in discussions of the crisis. Overall, this book focuses on idiosyncratic risk, but this chapter discusses examples of both idiosyncratic and systemic risk. We will see that systemic risk has been and continues to be a feature of banking and finance for both developed and developing economies. Importantly, the costs of systemic events dwarf those of idiosyncratic events by orders of magnitude. From a societal and macroeconomic perspective, systemic risk events are by far the more important.

The distinction between idiosyncratic and systemic disasters is also important because the sources and solutions for the two are quite different. The tools and techniques in this book are directed toward measuring, managing, and mitigating idiosyncratic risk but are largely ineffective against systemic risk. Identifying and measuring systemic risk resides more in the realm of macroeconomics than in quantitative finance. An analogy might be useful. Learning to swim is an effective individual strategy to mitigate drowning risk for someone at the town pool or visiting the beach. But for someone on the Titanic, the ability to swim was useful but not sufficient. A systemic solution including monitoring iceberg flows, having an adequate number of lifeboats and life belts on the ship, and arranging rescue by nearby ships was necessary (but sadly missing for the Titanic). Similarly, when macroeconomic imbalances alter costs, rewards, and incentives, an individual firm's risk management actions will not solve the macroeconomic problems.4

4.2 IDIOSYNCRATIC FINANCIAL EVENTS

Financial and trading disasters are often discussed under the rubric "rogue trading." Like many myths, this one contains some truth, but only partial truth. We will see, through examining a variety of events, that many financial disasters are not characterized by rogue trading. Trading disasters occur for a variety of reasons. Sometimes the cause is a rogue trader, as in the case of Barings Bank's 1995 collapse or AIB/Allfirst Financial's losses, but many events have resulted from legitimate trading activity gone wrong or a commercial or hedging activity that developed into outright speculation.

Table 4.1 shows a list of financial events over the years, focusing on events resulting from losses caused by trading in financial markets. It does not cover incidents that are primarily fraudulent rather than trading related, so it does not include Bernard Madoff's fraud. The list is long and, from my experience, reasonably comprehensive regarding the types of financial disasters, but it is not complete. The list clearly does not include events that are not publicly reported, and many fund managers, family trusts, and hedge funds are secretive and loath to reveal losses. For present purposes, Table 4.1 is sufficient; it both shows the scope of losses and includes losses from a wide variety of sources.

Table 4.1 includes few entries relating to the 2008–2009 crisis, and for this reason, it may seem out of date. In fact, the absence of recent events is intentional because Table 4.1 is intended to focus on idiosyncratic trading disasters and not systemic or macroeconomic financial crises. There have been huge losses across the global financial system relating to the recent financial crisis, but these losses are generally associated with the systemic financial crisis and are not purely idiosyncratic risk events. To focus more clearly on purely idiosyncratic events, Table 4.1 does not include most of the recent events. I return to the costs of systemic crises later in this chapter.

4 Regarding the risks of systemic events, the story of Goldman Sachs provides a useful cautionary tale. As related in Nocera (2009), during 2007 Goldman did not suffer the kinds of losses on mortgage-backed securities that other firms did. The reason was that Goldman had the good sense (and good luck) to identify that there were risks in the mortgage market that it was not comfortable with. As a result, Goldman reduced some mortgage exposures and hedged others. Note, however, that although Goldman did not suffer losses on the scale that Bear Stearns, Merrill Lynch, and Lehman Brothers did during the crisis, it still suffered in the general collapse. Ironically, Goldman was later pilloried in the U.S. Congress for shorting the mortgage market, the very action that mitigated its losses and that prudent idiosyncratic risk management principles would recommend.


TABLE 4.1 Trading Losses

Company Name | Original Currency Nominal (billion) | USD Nominal (billion) | Loss 2007 (billion) | Loss Relative to 2007 GDP (billion) | Year of Loss | Instrument
Long-Term Capital Management | USD 4.60 | 4.60 | 5.85 | 7.36 | 1998 | Interest rate and equity derivatives
Société Générale | EUR 4.90 | 7.22 | 6.95 | 7.03 | 2008 | European index futures
Amaranth Advisors | USD 6.50 | 6.50 | 6.69 | 6.83 | 2006 | Gas futures
Sumitomo Corporation | JPY 285.00 | 2.62 | 3.46 | 4.71 | 1996 | Copper futures
Orange County | USD 1.81 | 1.81 | 2.53 | 3.60 | 1994 | Interest rate derivatives
Showa Shell Sekiyu | JPY 166.00 | 1.49 | 2.14 | 3.16 | 1993 | FX trading
Kashima Oil | JPY 153.00 | 1.50 | 2.09 | 2.98 | 1994 | FX trading
Metallgesellschaft | USD 1.30 | 1.30 | 1.87 | 2.74 | 1993 | Oil futures
Barings Bank | GBP 0.83 | 1.31 | 1.78 | 2.48 | 1995 | Nikkei futures
Aracruz Celulose | BRL 4.62 | 2.52 | 2.43 | 2.46 | 2008 | FX speculation
Daiwa Bank | USD 1.10 | 1.10 | 1.50 | 2.09 | 1995 | Bonds
CITIC Pacific | HKD 14.70 | 1.89 | 1.82 | 1.84 | 2008 | FX trading
BAWAG | EUR 1.40 | 1.29 | 1.56 | 1.83 | 2000 | FX trading
Bankhaus Herstatt | DEM 0.47 | 0.18 | 0.76 | 1.71 | 1974 | FX trading
Union Bank of Switzerland | CHF 1.40 | 0.97 | 1.23 | 1.55 | 1998 | Equity derivatives
Askin Capital Management | USD 0.60 | 0.60 | 0.84 | 1.19 | 1994 | Mortgage-backed securities
Morgan Grenfell & Co. | GBP 0.40 | 0.66 | 0.85 | 1.11 | 1997 | Shares
Groupe Caisse d'Epargne | EUR 0.75 | 1.10 | 1.06 | 1.08 | 2008 | Derivatives
Sadia | BRL 2.00 | 1.09 | 1.05 | 1.06 | 2008 | FX speculation
AIB/Allfirst Financial | USD 0.69 | 0.69 | 0.80 | 0.91 | 2002 | FX options
State of West Virginia | USD 0.28 | 0.28 | 0.51 | 0.83 | 1987 | Fixed-income and interest rate derivatives
Merrill Lynch | USD 0.28 | 0.28 | 0.51 | 0.83 | 1987 | Mortgage (IO and PO[a]) trading
WestLB | EUR 0.60 | 0.82 | 0.82 | 0.82 | 2007 | Common and preferred shares
China Aviation Oil (Singapore) | USD 0.55 | 0.55 | 0.60 | 0.65 | 2004 | Oil futures and options
Bank of Montreal | CAD 0.68 | 0.64 | 0.64 | 0.64 | 2007 | Natural gas derivatives
Manhattan Investment Fund | USD 0.40 | 0.40 | 0.48 | 0.57 | 2000 | Short IT stocks during the Internet bubble
Hypo Group Alpe Adria | EUR 0.30 | 0.37 | 0.41 | 0.44 | 2004 | FX trading
Codelco | USD 0.21 | 0.21 | 0.30 | 0.44 | 1993 | Copper futures
Dexia Bank | EUR 0.30 | 0.27 | 0.31 | 0.37 | 2001 | Corporate bonds
National Australia Bank | AUD 0.36 | 0.31 | 0.34 | 0.36 | 2004 | FX trading
Calyon | EUR 0.25 | 0.34 | 0.34 | 0.34 | 2007 | Credit derivatives
Procter & Gamble | USD 0.16 | 0.16 | 0.22 | 0.31 | 1994 | Interest rate derivatives
NatWest Markets | GBP 0.09 | 0.15 | 0.19 | 0.25 | 1997 | Interest rate options
Kidder, Peabody & Co. | USD 0.08 | 0.08 | 0.10 | 0.15 | 1994 | Government bonds
MF Global Holdings | USD 0.14 | 0.14 | 0.13 | 0.14 | 2008 | Wheat futures

Notes: Derived from a list of trading losses that originated on Wikipedia, with calculations, additions, and verification from published reports by the author. "USD Nominal" is the original currency converted to U.S. dollars at the exchange rate for the year listed as "Year of Loss" using the annual exchange rate from Foreign Exchange Rates (Annual), Federal Reserve Statistical Release G.5A, available at www.federalreserve.gov/releases/g5a/. The "Loss 2007" is the dollar nominal converted to 2007 dollars using the annual average CPI for the "Year of Loss." The "Loss Relative to 2007 GDP" is the dollar nominal loss converted to a 2007 amount using the change in U.S. nominal GDP. This adjusts for both inflation and, roughly, growth in the economy. Note that the "Year of Loss" is a rough estimate of the year of the loss; some losses were accumulated over many years, so the conversions to U.S. nominal and 2007 equivalents are only approximate. Losses associated with the systemic financial crisis of 2008–2009 have been excluded. AUD = Australian dollar, BRL = Brazilian real, CAD = Canadian dollar, CHF = Swiss franc, DEM = German mark (replaced by the euro), EUR = euro, GBP = British pound, HKD = Hong Kong dollar, JPY = Japanese yen, USD = U.S. dollar.
[a] IO = interest only; PO = principal only.
Source: Sources by company are listed in the Supplemental Information in the Research Foundation of CFA Institute section of www.cfapubs.org.


Before turning to the table itself, caveats regarding the quoted loss amounts are necessary. These are estimates, often provided by the firm that suffered the loss and after a malefactor has left. Reconstructing trading activity after the fact is always difficult and is sometimes open to different interpretations. Even for simple exchange-traded instruments, it is surprisingly difficult, and financial disasters often involve complex over-the-counter (OTC) instruments for which pricing is hard, compounded with fraud and intentionally concealed prices and trades. Different accounting and mark-to-market standards across jurisdictions mean that different events may have different standards applied. Sometimes the loss that is publicly reported includes restatements for prior incorrectly reported profits rather than simply the economic loss from trading.5 Finally, a firm and the managers that have suffered a loss may have both the motivation and the opportunity to overstate or understate the loss: saying it is larger than it really is to make predecessors look foolish or venal and to flatter future results, or smaller than it really is to minimize the culpability of incumbent managers and the damage to the firm.

One final issue regarding the amounts in Table 4.1 needs to be discussed. A dollar lost in 1974 would be equivalent to more than 1 dollar today. Inflation is an obvious factor; a dollar in 1974 could buy more goods or services than it can today. There is also a more subtle effect. The market and the economy have grown over time, so a dollar in 1974, even after adjustment for ordinary (consumer price) inflation, represented a larger proportion of the total market or the total economy; a dollar could buy a larger proportion of the total goods and services produced. Table 4.1 shows both an adjustment of the nominal amounts for inflation (using the U.S. consumer price index [CPI]) and a rough adjustment for the size of the economy using U.S. nominal gross domestic product (GDP) growth. This latter adjustment is only approximate but gives a better idea of the relative importance of losses in different years than one would get by adjusting for inflation alone.6

5 Kidder, Peabody & Co.'s 1994 loss resulting from U.S. Treasury bond trading is a case in point. The loss is reported by some sources as $350 million. This amount was actually a write-down by Kidder or Kidder's parent, General Electric Company, which reflected both trading losses and the restatement of previously reported, but fictitious, profits. According to U.S. SEC documents, the actual loss caused by trading was $75 million.
6 As an example, the Herstatt loss in 1974 was $180 million at the time. Adjusting for U.S. CPI inflation (320.6 percent from 1974 to 2007) brings it to $760 million in 2007. Adjusting for growth in U.S. nominal GDP (838.8 percent, which adjusts for both inflation and growth in the economy), the loss is equivalent to roughly $1.71 billion in 2007.
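As a minimal sketch of the two adjustments described in footnote 6 and the notes to Table 4.1 (the function name and layout are illustrative, not from the original; the percentages are the cumulative increases quoted in the footnote):

```python
def scale_loss(nominal_millions: float, pct_increase: float) -> float:
    """Scale a nominal dollar loss by a cumulative percentage increase:
    CPI growth for the inflation adjustment, nominal GDP growth for the
    rough economy-size adjustment."""
    return nominal_millions * (1 + pct_increase / 100)

herstatt_1974 = 180.0  # Herstatt loss in USD millions, nominal

# CPI rose 320.6 percent from 1974 to 2007: about $760 million in 2007 dollars
print(round(scale_loss(herstatt_1974, 320.6)))  # 757

# Nominal GDP rose 838.8 percent: roughly $1.7 billion relative to 2007 GDP
print(round(scale_loss(herstatt_1974, 838.8)))  # 1690
```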


Thus, Table 4.1 shows the events, with the original currency amount, the original converted to U.S. dollars (at the average FX rate for the approximate year of loss), the U.S. dollar amount in 2007 dollars, and the U.S. dollar amount adjusted so that it is proportionate to 2007 U.S. nominal GDP (that is, adjusted for changes in both inflation and, roughly, the size of the economy). The events are sorted by the size of the loss relative to 2007 nominal GDP.

Categorization and Discussion of Losses

Table 4.1 is interesting in itself and highlights the importance of financial disasters over the years. The name Herstatt, for example, has entered the language as a particular form of cross-currency settlement risk: that which results from differing times for currency transfers.7

7 Note that Herstatt risk refers to the circumstances under which Herstatt was closed rather than the trading loss that caused Herstatt's collapse.

We can, however, do more than simply admire the size of the losses in Table 4.1. We can use the events to understand more about the sources and circumstances of financial disasters and losses. I have attempted to provide additional information on each event, shown in Table 4.2, concerning

- Whether the event involved fraud.
- If there was fraud, whether it primarily involved fraudulent trading, that is, actively hiding trades from supervisors or accountants, creating false trading entries, and so on. I mean this to be distinct from simply trading in excess of limits, which often involves taking larger positions than authorized but not actively hiding that fact.
- If there was fraud, whether it was primarily to hide losses that had originated from sources other than fraud. An example is Codelco, where a computer entry led to a wrong-way-around trade that lost $30 million. Subsequent fraudulent trading appears to have been an attempt to make back the original loss.
- Whether the underlying company or business was involved in (primarily) banking, finance, or investment activity.
- Whether the event involved legitimate trading, hedging, or commercial activity that went wrong in some way. For example, Amaranth Advisors' losses in natural gas futures trading were a result of Amaranth's legitimate business activity, even if one might argue, at least in retrospect, that the size and exact form of the position taking may have been foolish. As another example, Aracruz Celulose was a Brazilian pulp producer that lost money in foreign exchange (FX) speculation. The speculation seems to have started as a commercially reasonable strategy to hedge the FX exposure resulting from export earnings, a strategy that grew into leveraged speculation.
- Years over which the losses accumulated.
- Whether there was a failure to segregate activities (particularly trading and back-office).
- Whether there was lax trading supervision or other management or control problems.


TABLE 4.2 Trading Losses, with Additional Characteristics

Each row gives: Company | Loss Relative to 2007 GDP ($ billion) | Fraud | Fraudulent Trading | Fraud Originated to Cover Up Problem | Normal Trading, Hedging, or Commercial Activity Gone Wrong | Trading in Excess of Limits | Primary Activity Finance or Investing | Years over which Losses Accumulated | Failure to Segregate Functions | Lax Trading Supervision or Mgmt/Control Problem | Note

Panel A. Involving Fraud

Fraud = Yes and Fraudulent Trading = Yes
Société Générale | 7.03 | Yes | Yes | Special | No | Yes | Yes | 2 | Unknown | Yes | Fraud seems to have originated to hide outsized profits.
Sumitomo Corp. | 4.71 | Yes | Yes | Yes | No | Yes | No | 13 | Unknown | Yes | Fraud originated with off-the-books trading, then continued in an attempt to recover losses, apparently not for personal gain (apart from keeping job).
Barings Bank | 2.48 | Yes | Yes | No | No | Yes | Yes | 3 | Yes | Yes | Fraud was for personal gain (higher bonus).
Daiwa Bank | 2.09 | Yes | Yes | Yes | No | Yes | Yes | 11 | Yes | Yes | Fraud started with small ($200,000) loss, then continued to hide and try to recover losses.
AIB/Allfirst Financial | 0.91 | Yes | Yes | No | No | Yes | Yes | 5 | Yes | Yes | Fraud was for personal gain (higher bonus).
Bank of Montreal | 0.64 | Yes | Yes | No | No | Unknown | Yes | 2 | No | Probably no | Fraud was for personal gain (higher bonus).
Codelco | 0.44 | Yes | Yes | Yes | No | Yes | No | <1 | Unknown | Yes | Mistaken buy vs. sell led to $30 million loss, then trader tried to recover and lost more.
National Australia Bank | 0.36 | Yes | Yes | Yes | Unknown | Yes | Yes | 1 to 2 | Unknown | Unknown | Fraud originated to cover an AUD5 million loss, then losses grew.
Kidder, Peabody & Co. | 0.15 | Yes | Yes | No | No | No | Yes | 3 | No | Unknown | Generated fraudulent profits by taking advantage of accounting system flaws.

Fraud = Yes and Fraudulent Trading = No
Showa Shell Sekiyu | 3.16 | Yes | No | Yes | Yes | Unknown | No | Many | No | No | Losses were hidden for years, apparently to avoid embarrassment.
Kashima Oil | 2.98 | Yes | No | Yes | Yes | Unknown | No | 6 | No | No | Losses were hidden for years, apparently to avoid embarrassment.
CITIC Pacific | 1.84 | Yes | No | Yes | Yes | Yes | No | 1 | No | Yes | There was apparently fraud to cover up a mistaken hedging transaction.
BAWAG | 1.83 | Yes | No | Yes | Yes | Probably yes | Yes | 2 to 8 | Unknown | Maybe | Losses hidden (fraudulently) from 2000 to 2006.
Morgan Grenfell | 1.11 | Yes | No | No | Yes | Yes | Yes | 2 | Unknown | Unknown | Fraud to circumvent regulatory rules on holding concentrated position in a single firm. Not fraud to hide trades or for personal gain.
State of West Virginia | 0.83 | Yes | No | Yes | Yes | No | Yes | <1 | Unknown | Unknown | Losses from poor investment strategy covered up for a period, but no fraud in generating losses.
China Aviation Oil (Singapore) | 0.65 | Yes | No | Yes | Maybe | Unknown | No | 1 to 2 | Unknown | Probably no | Speculation in oil futures and options, with fraud to hide losses from investors.
Manhattan Investment Fund | 0.57 | Yes | No | Yes | Yes | No | Yes | 3 | Unknown | No | Fraud to cover losses made through an otherwise legitimate strategy to short technology stocks during the technology bubble.
Hypo Group Alpe Adria | 0.44 | Yes | No | Yes | Probably | Unknown | Yes | 2 | Unknown | Unknown | Fraud to cover losses from a currency swap.
NatWest Markets | 0.25 | Yes | No | Yes | Yes | Probably no | Yes | 2 | Partial | Yes | Interest rate options were mismarked, apparently in error to start, then fraudulently to conceal losses.

Fraud = Other and Fraudulent Trading = NA
MF Global | 0.14 | Possible | NA | NA | Yes | Yes | Yes | <1 | No | Yes | Trade exceeded authorized position size.
Dexia Bank | 0.37 | Unknown | NA | NA | Yes | Unknown | Yes | 1 to 2 | Unknown | Unknown | Bond trade "ignored internal control procedures and invested in risky investments."

Panel B. Not Involving Fraud
Long-Term Capital Management | 7.36 | No | NA | NA | Yes | No | Yes | <1 | No | No | Large positions in swap spreads, equity volatility, yield curve arbitrage, stocks, etc.
Amaranth Advisors | 6.83 | No | NA | NA | Yes | No | Yes | <1 | No | No | Large position in natural gas futures.
Orange County | 3.60 | No | NA | NA | Yes | No | Yes | 1 | No | No | County investment pool, leveraged, badly hurt when rates rose in 1994.
Metallgesellschaft | 2.74 | No | NA | NA | Yes | No | Yes | 1 | No | No | Hedging strategy that went wrong.
Aracruz Celulose | 2.46 | No | NA | NA | Yes | Unknown | No | 1 | No | Unknown | Speculative FX trading, growing out of hedging commercial transactions.
Bankhaus Herstatt | 1.71 | No | NA | NA | Yes | Probably yes | Yes | 1 to 2 | Unknown | Yes | FX speculation, possibly outside of limits.
Union Bank of Switzerland | 1.55 | No | NA | NA | Yes | No | Yes | 1 to 3 | No | Yes | Mispricing of embedded options led to losses when Japanese bank shares fell.
Askin Capital Management | 1.19 | No | NA | NA | Yes | No | Yes | 1 | No | No | Investment in mortgage-related products.
Groupe Caisse d'Epargne | 1.08 | No | NA | NA | Yes | Yes | Yes | <1 | No | Maybe | Large positions in equity futures, said to exceed limits.
Sadia | 1.06 | No | NA | NA | Yes | Unknown | No | 1 | No | No | Speculative FX trading, growing out of hedging commercial transactions.
Merrill Lynch | 0.83 | No | NA | NA | Yes | Yes | Yes | 1 | No | Yes | Trading in mortgage IO/PO strips, partly beyond authorized limits, caused losses when rates spiked up.
WestLB | 0.82 | No | NA | NA | Yes | Probably no | Yes | 1 | No | Maybe | Proprietary trading, primarily spreads between common and preferred shares.
Calyon | 0.34 | No | NA | NA | Yes | Yes | Yes | <1 | No | Maybe | Large positions in index-based CDSs, said to be in excess of authorized limits.
Procter & Gamble | 0.31 | No | NA | NA | Maybe | Yes | No | <1 | No | Unknown | Speculation in highly leveraged swaps related to interest rates and FX.

Notes: See notes to Table 4.1. Data on additional characteristics are based on reading of published reports (see the Supplemental Information in the Research Foundation of CFA Institute section of www.cfapubs.org for sources) and the author's judgment.


The information shown in Table 4.2 is, to some extent, subjective. The data are based on a reading of published reports of the incidents and reflect my judgment. When the exact nature or circumstance of a loss is not clear from reports, I have tried to note that in the table. I have used my best judgment in sorting events into the various categories; sources are given in the online supplemental information so that others can make their own assessment.

Table 4.3 lists the events, again sorted by the size of the loss relative to 2007 GDP, with a longer description of each event.

Fraud

Fraud is an important distinguishing characteristic for the events listed in Table 4.1. There are 35 events in total, and 19 (54 percent) involved fraud in one way or another. Some very large losses have involved fraud (Société Générale, Barings, Sumitomo Corporation, Showa Shell Sekiyu), but by the same token, some of the larger losses have not involved fraud (Long-Term Capital Management, Amaranth Advisors, Orange County). Panel A of Table 4.2 shows events in which fraud appears to have been involved, and Panel B shows those for which fraud does not seem to have been important.

We usually think of fraud as motivated by personal enrichment: the celebrated rogue trader.8 Barings might be the best-known case, in which Nick Leeson reportedly hid losing trades, inflated his group's trading profits, and earned large personal bonuses. In addition to Barings, the events at AIB/Allfirst; Kidder, Peabody & Co.; and Bank of Montreal appear to have involved fraud for personal gain.

Although fraud for personal enrichment jumps to mind first, it does not appear to be the most common source of fraud. I have found it useful when examining the events in Table 4.2, Panel A, to consider the following classifications of fraud:

- Primarily involving actively fraudulent trading, divided into
  - Fraud for personal enrichment
  - Fraud to make back losses or some other motivation that is not primarily personal enrichment
- Not primarily fraudulent trading, usually to hide losses that have occurred by other means

8 By personal gain or enrichment, I mean direct gain over and above retaining one's job and a more-or-less standard salary; personal enrichment would, for example, take place through a large bonus that would not have been paid absent the fraud.


TABLE 4.3 Short Description of Trading Losses

Each entry gives: Company Name | Loss Relative to 2007 GDP (billion) | Instruments | Year of Loss, followed by a short description of the event.

Long-Term Capital Management | $7.36 | Interest rate and equity derivatives | 1998
LTCM had highly leveraged positions in a variety of markets (swap spreads, equity volatility, yield curve arbitrage, stocks, and so on). None of these on their own, in a smaller size, or at a different time would have been bad positions, but together they were toxic. After Russia's 1998 debt default, large market moves generated losses for LTCM. Furthermore, because of the large size and illiquid nature of many trades, LTCM was caught in a liquidity crisis.

Société Générale | 7.03 | European index futures | 2008
Jérôme Kerviel was a trader in equity cash/futures arbitrage. Starting in 2006, he put on outright positions, which became very large (up to EUR 49.9 billion). By the end of 2007, Kerviel had made a profit of EUR 1.4 billion. Kerviel used fictitious trades to hide the size of the profits. During early 2008, the positions started losing substantial amounts. The bank closed out positions. Managers were alleged to have been aware of the size of the positions but to have ignored the risk when the trades were profitable, but these allegations were disputed.

Amaranth Advisors | 6.83 | Gas futures | 2006
Amaranth was a hedge fund initially focused on convertible arbitrage. Its energy desk, dominated by a single trader, undertook spread trades in natural gas, among them that March/April spreads would widen (March up because of heating demand, April down with warm weather and lower demand). Similar trades had been profitable before. Nonetheless, position size was very large relative to the market and relative to Amaranth's capital. The spreads eventually moved against Amaranth, and the hedge fund had to close because of the size of losses.

Sumitomo Corporation | 4.71 | Copper futures | 1996
Trading in copper by a single trader was carried out far beyond authorized trading limits over a number of years, with fraudulent reporting and hiding of trades. The original trading apparently started in 1985 with off-the-books trading, and then fraudulent trading continued in an attempt to recover original losses.

Orange County | 3.60 | Interest rate derivatives | 1994
Robert Citron, as treasurer of Orange County, California, managed about $7.5 billion in capital and leveraged it with an additional $12.5 billion using reverse repos. He bought bonds but also exotics, such as inverse floaters. Basically, he was funding short and investing long. When short rates went up in 1994 (the curve inverted), the county lost substantial sums and had to declare bankruptcy.

Showa Shell Sekiyu | 3.16 | FX trading | 1993
Speculation in forward FX led to losses that accumulated over many years because of a lack of mark-to-market and clear accounting rules. It is likely, but not absolutely clear to me, that this started as a commercially reasonable strategy to hedge FX trade receipts or liabilities and then grew into outright speculation.

Kashima Oil | 2.98 | FX trading | 1994
Speculation in forward FX led to losses that accumulated over many years because of a lack of mark-to-market and clear accounting rules. It is likely, but not absolutely clear to me, that this started as a commercially reasonable strategy to hedge FX trade receipts or liabilities and then grew into outright speculation.

Metallgesellschaft | 2.74 | Oil futures | 1993
Strategy to hedge long-dated fixed-price oil delivery contracts using short-dated futures and OTC swaps (essentially buying a stack of near-contract futures). Although questionable, the strategy is not fatally flawed but provides only a partial hedge. It is subject to basis risk (if the spread between the short-dated futures price and the long-dated contract price moves), liquidity risk (if the near-contract futures price falls, generating realized losses that will only be recouped over time as the long-term contracts mature), and counterparty credit risk (if the long-dated contract price falls, counterparties may renege on contracts, generating credit losses). Metallgesellschaft apparently suffered primarily from liquidity risk, with basis risk contributing. Different accounting treatment of hedge gains and losses between the United States and Germany also contributed.

Barings Bank | 2.48 | Nikkei futures | 1995
Nick Leeson was a trader/manager who was supposed to be arbitraging Osaka versus SIMEX futures. He had both trading and operational responsibility. Starting in 1992, he took outright positions and fraudulently hid losses in an "error account," 88888. Reported profits were large through 1994 (with true, offsetting losses hidden in the error account). Positions grew in the first two months of 1995; losses and consequent margin calls from futures exchanges grew so large that the fraud could not be maintained after the Kobe earthquake struck, and Leeson fled on February 23, 1995. Barings collapsed under the weight of the losses.

Aracruz Celulose | 2.46 | FX speculation | 2008
A commercially reasonable strategy to hedge FX trade receipts grew into a large speculative activity. This worked as long as the BRL did not depreciate substantially, but when it did in late 2008, the trade generated large losses.

Daiwa Bank | 2.09 | Bonds | 1995
A bond trader held both trading and back-office responsibilities in Daiwa Bank's New York branch. Over a period of 11 years, he accumulated $1.1 billion of losses (not for personal gain), which he hid by fraudulently selling securities held in custody for the bank and customers. The trader confessed on July 13, 1995. Management at the branch was very poor, and senior managers misled bank examiners and regulators, both before the confession and more actively afterward. The bank's U.S. license was revoked, and Daiwa was expelled from the United States.

CITIC Pacific | 1.84 | FX trading | 2008
This Hong Kong-based firm was seemingly attempting to hedge a prospective AUD1.6 billion acquisition, but for reasons I cannot determine, the hedge was levered to AUD9 billion. There were claims of trading without authorization and lax supervision.

BAWAG | 1.83 | FX trading | 2000
BAWAG was an Austrian bank alleged to have invested in a hedge fund (with connections to senior bank officials) to speculate in financial markets (FX in particular). The hedge fund made substantial losses in yen FX trades, and the bank conspired to hide the losses for roughly six years. BAWAG was mixed up in the 2005 Refco fraud, which brought the earlier trading losses to light, but the Refco scandal appears to have been separate from these FX losses.

Bankhaus Herstatt | 1.71 | FX trading | 1974
Herstatt speculated in FX trading and built up substantial losses. The name Herstatt is now used for a type of settlement risk, after the circumstances of Herstatt's closing. German authorities closed the bank early in the day (New York time) on June 26, 1974. Counterparty banks had transferred DEM moneys to Herstatt for settlement of FX trades, but Herstatt was closed by authorities before it transferred USD moneys in payment, and the counterparties faced losses. This nearly caused the collapse of the payment system. Since then, settlement procedures have been changed to remove the intraday delay for settlement of FX trades.

Union Bank of Switzerland | 1.55 | Equity derivatives | 1998
The equity derivatives trading desk had very large positions in Japanese bank convertible preference shares. It did not properly hedge or value the embedded put options, and when Japanese bank shares fell precipitously (after Yamaichi Securities Company went under in November 1997), it lost large amounts. This event is believed to have precipitated the merger of UBS and SBC in 1998. The loss is often quoted as CHF625 million, the amount UBS wrote off before the merger, but it should also include the CHF760 million write-off after the merger. The equity derivatives desk apparently operated without the same risk management controls as other parts of the firm.

Askin Capital Management | 1.19 | Mortgage-backed securities | 1994
Askin Capital Management invested in PO strips of CMOs (collateralized mortgage obligations). POs are very sensitive to rises in interest rates: when rates rise, principal repayments slow and POs fall in value. In 1994, rates rose dramatically, prepayments fell, the POs lost value, and Askin was caught in a liquidity crisis and had to liquidate all funds. This led to the closure of Askin's hedge funds (Granite Partners, Granite Corporation, and Quartz Hedge Fund) with the loss of virtually all assets.

Morgan Grenfell | 1.11 | Shares | 1997
The firm purchased highly speculative stocks; some fraud was involved to circumvent rules restricting a fund from holding concentrated positions in a single company.

Groupe Caisse d'Epargne | 1.08 | Derivatives | 2008
The event involved trading by a small group of equity derivatives traders at a proprietary trading unit of Caisse Nationale des Caisses d'Epargne (the holding company of Groupe Caisse d'Epargne). Losses were eventually reported as EUR 750 million. Traders were said to have exceeded limits.

Sadia | 1.06 | FX speculation | 2008
A commercially reasonable strategy to hedge FX trade receipts grew into a large speculative activity. This worked as long as the BRL did not depreciate substantially, but when it did in late 2008, the trade generated large losses.

AIB/Allfirst Financial | 0.91 | FX trading | 2002
John Rusnak was an FX trader at Allfirst Financial (a U.S. subsidiary of Allied Irish Banks) who accumulated $691 million in losses. He claimed to make money by running a large options book that was hedged in the cash markets. In 1997, he started to lose money in outright yen forward positions and created fake options to hide those losses. He managed to enter the fake options into the back-office system. Rusnak manipulated prices used to value positions and circumvented limits. The fraud was not uncovered until 2002.

State of West Virginia | 0.83 | Fixed-income and interest rate derivatives | 1987
A consolidated fund that pooled short-term assets of local governments invested short-term funds in long-term bonds with substantial leverage. When a sharp rise in long-term rates occurred in April 1987, the strategy resulted in large losses (30-year Treasury rates went from 7.46 percent at the beginning of March to 8.45 percent at the end of April). Losses were fraudulently hidden and eventually disclosed in December 1988.

Merrill Lynch | 0.83 | Mortgages (IOs and POs) trading | 1987
A trader in mortgage IO/PO strips exceeded trading authorization and created a large pool of IOs and POs. Merrill sold the IOs, but the POs were apparently overpriced, and Merrill held onto them. (On April 8, Merrill underwrote $925 million of strips but sold only the IOs. The trader, Howard A. Rubin, then created another $800 million [beyond authority] and again sold the IOs. Rates went up sometime around April 10.) When the rate spiked, the value of the POs fell. Merrill eventually traded out, taking a $275 million loss.

WestLB | 0.82 | Common and preferred shares | 2007
WestLB was a German state-run bank. Losses were in proprietary trading, primarily in spreads between common and preferred shares. (Gains from trading in bonds and currencies partially offset the equity losses.) Note that in subsequent years, because of the systemic financial crises, WestLB has run into substantial problems related to investments and its loan book.

China Aviation Oil (Singapore) | 0.65 | Oil futures and options | 2004
China Aviation Oil is a Singapore-based company that has a monopoly of China's jet fuel market. Managers at the firm speculated on movements in the price of oil and then tried to hide the losses from investors.

Bank of Montreal (BMO) | 0.64 | Natural gas derivatives | 2007
David Lee overvalued BMO's natural gas options by mismarking positions for which prices were not available. He colluded with an outside broker to have them provide these mismarked positions to the bank's risk management group.

Manhattan Investment Fund | 0.57 | Short IT stocks during the Internet bubble | 2000
Michael Berger was an Austrian investment manager (operating in the United States) who started Manhattan Investment Fund, a hedge fund, in 1996. The strategy was shorting technology stocks. Unfortunately for Berger, who was right in fundamentals but wrong in timing, the technology bubble continued to inflate through 2000. By 1999, trading losses had accumulated to more than $300 million (according to the U.S. SEC). Berger forged documents and fraudulently reported gains to investors throughout the period from 1996 to 2000. Berger pled guilty to fraud in 2000 but subsequently fled the United States.

Hypo Group Alpe Adria | 0.44 | FX trading | 2004
There were EUR 300 million losses from a currency swap in 2004, with subsequent fraud to cover the losses. The 2004 trading losses were minor, however, relative to the losses in the 2008–2009 financial crisis (2009 after-tax losses: EUR 1.6 billion). The bank was nationalized in December 2009 to avoid a collapse. Problems were still ongoing as of early 2010.

Codelco | 0.44 | Copper futures | 1993
A trader for the Chilean state copper company entered a mistaken futures buy instead of a sell into a computer system, which led to a $30 million loss. The trader then took large positions in copper but also silver and gold futures, and the loss grew to $210 million.

Dexia Bank | 0.37 | Corporate bonds | 2001
There is little information on this event, but apparently a bond trader "ignored internal control procedures and invested in risky investments." In any case, this was overshadowed by losses related to municipal and bond insurance in the 2008–2009 financial crisis. Losses for 2008 were EUR 3.3 billion, and the bank required state aid from Belgium, France, and Luxembourg.

National Australia Bank | 0.36 | FX trading | 2004
An FX trader lost AUD5 million in 2003 and fraudulently claimed an AUD37 million profit to cover up. During 2004, trading (fraudulently concealed) generated a total of AUD360 million in losses.

Calyon | 0.34 | Credit derivatives | 2007
Calyon was a U.S.-based subsidiary of Crédit Agricole. Losses appear to have been from trading in index-based credit default swaps (CDSs) that were said to be in excess of the unit's authorized limits. The trader involved and five superiors were fired.

Procter & Gamble | 0.31 | Interest rate derivatives | 1994
This event involved speculation in highly leveraged swaps related to interest rates and FX.

NatWest Markets | 0.25 | Interest rate options | 1997
Initially, exchange-traded DEM options were mismarked as a result of not accounting properly for the volatility smile, apparently in error rather than fraudulently. Subsequently, the trader fraudulently manipulated marks in the swaption book to hide the original losses. There was poor segregation of responsibilities, with the trader supplying at least some of the implied volatility marks.

Kidder, Peabody & Co. | 0.15 | Government bonds | 1994
Joseph Jett, a Kidder Peabody bond trader, generated fraudulent profits by taking advantage of accounting system flaws. The accounting system ignored the difference between spot and forward prices. A trader could exploit this problem and generate phantom profits by selling U.S. Treasury strips forward and buying the bond (reconstituting the bond). The loss is often quoted as $350 million or $250 million, but that is the write-off that Kidder Peabody and GE had to take to adjust for earlier reported phantom profits. The real loss seems to have been more like $75 million, according to the SEC.

MF Global Holdings | 0.14 | Wheat futures | 2008
A trader exceeded authorized position size on wheat contracts. The trade entry system that should have blocked the trade did not do so.

Notes: See notes to Table 4.1. The description of each event is based on the reading of published reports (see the supplemental information in the Research Foundation of CFA Institute section of www.cfapubs.org for sources) and the author's judgment.


Before turning to an examination of specific cases, we should note an important philosophical point regarding distinctions between different forms of fraud. On the one hand, in the eyes of the law, in the effect on shareholders or investors or fellow workers, and in the size of losses, there is little distinction among different motivations for fraud. The judge in the National Australia Bank case stated it succinctly: "You and your team saw yourselves as . . . justified in your criminal conduct by asserting that your principal motives were to make money for the bank (not for yourselves). That is simply no excuse."9 Fraud is fraud, and there is no excuse.

9 Miletic (2005).

On the other hand, to combat and protect against fraud, we need a more nuanced approach. Understanding the origin of and motivation for fraud, and understanding the modalities of fraud, is one step toward designing organizations, processes, and procedures that are not vulnerable to fraud. For example, we will see that most frauds are undertaken to cover up other problems, which implies that measures to reduce errors that might grow into fraudulent events will be one strategy to minimize the incidence of fraud.

Fraudulent Trading for Personal Enrichment  In some cases, the primary motivation for or origin of the fraudulent trading appears to be personal gain. These are the cases that most closely fit our idea of rogue trading: hiding trades, creating false trade entries, and so on, in the pursuit of a promotion, a larger bonus, or other direct reward. Barings, AIB/Allfirst, Kidder Peabody, and Bank of Montreal most closely fit this paradigm. Interestingly, this category does not seem to cover the majority of fraud cases, or even the majority of fraudulent trading cases.

Fraudulent Trading for Other Reasons (Usually to Cover Losses)  Other cases involve fraudulent trading, but the intent was usually to cover a (relatively) small loss. Daiwa Bank, Codelco, Sumitomo, and National Australia Bank fall into this category. Codelco is a nice example. A trader for the Chilean state copper company was trading copper futures (as part of his normal job) and apparently entered a buy instead of a sell into a computer system. This wrong-way trade generated a $30 million loss. To try to make back the loss, the trader took unauthorized positions in copper, silver, and gold and grew the loss to almost $210 million. Unfortunately, the evidence of Table 4.2, Panel A (and my own personal experience, cleaning up after such an incident and not as a perpetrator), shows that this pattern is all too common: an otherwise innocent mistake leads to a loss that then leads to a fraudulent cover-up of losses, often with further trading that magnifies the loss.

Daiwa is another example, and one of the most egregious. The fraud apparently started as an attempt to hide a $200,000 loss early in the career of a bond trader in New York, with the fraud continuing to save and protect reputation. The fraud was apparently not for personal benefit but, rather, on behalf and for the benefit of the bank. The fraud continued for 11 years. Management oversight at the branch was very poor, and senior managers misled bank examiners and regulators, both before the trader's confession and more actively after. The bank's U.S. license was revoked, and Daiwa was expelled from the United States.

Of course, one must view skeptically the statements of perpetrators who say that they did not act for personal enrichment, but in the Daiwa case (and other cases), there is reasonable evidence that offenders did not benefit directly, apart from the obvious benefit of retaining a job and a more-or-less standard salary.10

10 In the case of Toshihide Iguchi and Daiwa's losses on bond trading, even the U.S. prosecutor said as much (New York Times, September 27, 1995: www.nytimes.com/1995/09/27/business/an-unusual-path-to-big-time-trading.html). In the case of NatWest Markets' loss on mismarking swaption volatilities, the regulator (the Securities and Futures Authority) concluded afterward that the event as a whole was not inspired by the pursuit of personal gain.

Société Générale is a special case. It could be considered both as a case of trading for personal enrichment and as an odd case of cover-up. Published reports indicate that Jérôme Kerviel, the trader involved, originally hid trades and created false entries to hide excessive profits, not losses.

Fraud Other than Directly Fraudulent Trading  Ten events shown in Table 4.2, Panel A, involved fraud but not fraudulent trading, at least in regard to a single trader executing and hiding trades against the employer's interest: Showa Shell Sekiyu, Kashima Oil Co., CITIC Pacific, BAWAG, Morgan Grenfell & Co., the state of West Virginia, China Aviation Oil, Manhattan Investment Fund, Hypo Group Alpe Adria, and NatWest Markets. For all except Morgan Grenfell, the fraud involved covering up losses that were generated in some other way, usually to avoid revealing the losses to shareholders or regulators. Most or all of the losses were generated in relatively standard business.11

11 BAWAG may be an exception. BAWAG apparently invested in a hedge fund that undertook trading outside of BAWAG's authorized investment rules, although it seems that senior BAWAG managers may have directed the hedge fund to do so.

Morgan Grenfell was an exception: The fraud involved setting up dummy companies to avoid regulatory restrictions on a fund holding large concentrations in a single company. Investment in the companies was not illegal or fraudulent per se except for the regulatory prohibition on the concentrations, although in retrospect the investments themselves appear to have been based on very poor judgment.

Origin of Fraud  Most of the cases of fraud shown in Table 4.2, Panel A, were motivated more by attempts to cover up a problem than by the goal of personal enrichment. Four out of 19 (Barings, AIB/Allfirst, Bank of Montreal, and Kidder Peabody, plus possibly Société Générale) were primarily motivated by personal gain. In contrast, 13 out of 19 were primarily motivated by trying to cover up a problem or trading loss.

Various policies and practices are needed to avert fraud no matter what its origin:12

- Separation of front-office and back-office (trade processing and P&L reporting) responsibilities
- Mark-to-market accounting and timely reporting of P&L, with P&L disseminated up the management hierarchy
- Effective risk measurement and reporting architecture
- Strong business line supervisory controls
- Firm understanding by senior management of the business and products traded

12 See Wilmer Cutler Pickering Hale and Dorr (2008) for a discussion of lessons learned from trading loss events. The report focuses on rogue traders and five of the events discussed here (Daiwa, Barings, AIB/Allfirst, Kidder Peabody, Société Générale).

These policies and practices ensure that fraud is hard to execute (for example, separation of front-office and back-office functions makes it hard to forge trade tickets) and that mistakes and unusual P&L get recognized early (mark-to-market accounting ensures problems are recognized). High-quality information and transparency are the first defense against fraud, but availability of information alone is not sufficient: Managers must understand and be able to use the information.

As argued earlier, however, understanding the origin of or motivation for fraud is also important for developing strategies to combat it. Certain strategies are particularly effective against fraud motivated by personal gain:

- Ensuring that incentive systems do not encourage excessive risk.
- Monitoring and scrutinizing successful traders as much as (or more than) unsuccessful traders.
- Ensuring that traders take regular vacations. (It is hard to maintain a fraud when one is out of the office.)
- Setting up a culture of compliance and responsible risk taking, starting at the top with the board and senior management.

These strategies and practices are well accepted, but there are others, not as often highlighted, that are particularly important to avert fraud that originates in trying to hide losses resulting from other sources:

- Designing systems and processes to make it easy for traders and back-office personnel to do the right thing and hard to do the wrong thing.
- Investing in people and infrastructure to streamline and automate operational procedures to reduce operational errors.
- Setting up a culture that encourages employees to own up to mistakes.

Financial markets can be a complex, fast-moving, and confusing environment. Automation, checklists, and well-designed systems and procedures can smooth both front-office and back-office activity and make it easier to do the right thing. For example, an option-pricing screen that accepts an entry of "101.16" as a U.S. Treasury price of 101 16/32 can lead to confusion between (decimal) $101.16 and $101.50; this is a minor error for an option strike when it is far out of the money but potentially serious when the option is at the money.
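A minimal sketch of the kind of entry discipline that removes this ambiguity (the helper below is hypothetical, not from the original): require an explicit separator for 32nds, so a decimal entry can never be read as ticks.

```python
def parse_treasury_price(quote: str) -> float:
    """Parse a U.S. Treasury price. '101-16' means 101 + 16/32 = 101.50;
    '101.16' is plain decimal. Silently accepting '101.16' as 101 16/32
    is exactly the confusion described in the text."""
    if "-" in quote:
        handle, ticks = quote.split("-")
        if not 0 <= int(ticks) < 32:
            raise ValueError(f"ticks out of range in {quote!r}")
        return int(handle) + int(ticks) / 32.0
    return float(quote)

assert parse_treasury_price("101-16") == 101.50
assert parse_treasury_price("101.16") == 101.16
```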

Table 4.4 summarizes the total number of events shown in Tables 4.1, 4.2, and 4.3, categorized by whether there was fraud or not and whether the event involved legitimate business activity that went wrong for one reason or another. The categorization by fraud was just discussed; the categorization by legitimate business activity that went wrong is discussed next.

Normal Business Activity Gone Wrong

It might seem odd to think of financial disasters as being the result of normal business, but that is the case for many events. Table 4.2, Panel B, shows the events for which fraud was not a primary issue. These events constitute 14 of the 35 events (plus two for which I could not determine whether fraud was involved). When we also consider events that did include fraud, we find that the majority of events were the result of or originated in legitimate trading, hedging, or commercial activity. In total, 23 out of 35 events originated in normal business activity that went wrong (with four events unknown or uncertain).

The meaning of "normal trading, hedging, or commercial activity that went wrong" needs a little clarification. It can be divided into three rough categories:

1. Legitimate trading or hedging that was simply ill judged, not fraudulent
2. Legitimate trading or hedging that involved fraud tangentially
3. Speculation that started from a legitimate commercial activity

The meaning of these categories is best explained by considering the cases that fall under them.

Legitimate Trading or Hedging that Was Simply Ill Judged, Not Fraudulent (11 Cases)  This category includes LTCM, Amaranth Advisors, Orange County, Groupe Caisse d'Epargne, Askin Capital Management, WestLB, Bankhaus Herstatt, Merrill Lynch, Calyon, Union Bank of Switzerland, and Metallgesellschaft. Virtually all of these were financial or investment firms that were undertaking the business they were intended to undertake, and with at least some expertise in their area. (Metallgesellschaft, although not a financial firm, is included because its hedging program was a significant part of the business strategy rather than an ancillary activity.) After the fact, one can argue that their positions were inappropriate, too large, even irresponsible, but that argument is always easier to make after rather than before. LTCM, for example, was a hedge fund with large positions in a variety of markets, and the positions were particularly large in swap spreads and equity volatility. The fund was highly leveraged and lost virtually all its capital, but there was no malfeasance or wrongdoing. In some cases, there was trading in excess of limits (Groupe Caisse d'Epargne, Merrill Lynch, Calyon, and probably Bankhaus Herstatt), although not what I would judge as outright fraud. Metallgesellschaft was a case of a commercial firm hedging its business activity.

TABLE 4.4 Summary of Events, Categorized by Fraud and Legitimate Business Activity

Fraud Present | Number
Yes fraud | 19
  Fraudulent trading | 9
    Personal enrichment | 4
    Other reasons | 5
  Not fraudulent trading | 10
  Fraud to cover problems | 13
No fraud | 14
Uncertain if fraud present | 2
Total | 35

Legitimate Business Activity | Number
Trading/commercial origin | 23
  Trading | 18
    No fraud | 11
    Yes fraud | 5
    Uncertain | 2
  Commercial activity, led to speculation/fraud | 5
Not trading/commercial origin | 8
Uncertain origin | 4
Total | 35

Note: These counts summarize the data shown in Table 4.2.
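The counts in Table 4.4 are simple tallies over the event-level flags in Table 4.2. As a minimal sketch of the idea (two hypothetical records rather than the full 35 rows, and invented field names):

```python
from collections import Counter

# Hypothetical event records; in practice, the 35 rows of Table 4.2
events = [
    {"name": "Barings Bank", "fraud": "yes", "origin": "trading"},
    {"name": "Amaranth Advisors", "fraud": "no", "origin": "trading"},
]

fraud_counts = Counter(e["fraud"] for e in events)    # Counter({'yes': 1, 'no': 1})
origin_counts = Counter(e["origin"] for e in events)  # Counter({'trading': 2})
print(fraud_counts, origin_counts)
```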


Legitimate Trading or Hedging that Involved Fraud Tangentially (Five Cases)  This category covers financial or investment firms that were involved in legitimate business but fraud was involved to cover up losses or some other problem. That is, the fraud was not central to the loss. This category includes BAWAG, West Virginia, Manhattan Investment Fund, NatWest Markets, and Morgan Grenfell. West Virginia is a good example. The loss was the result of investing short-term funds (from a fund that pooled short-term assets of local governments) in long-term bonds with substantial leverage. (The situation, by the way, was remarkably similar to Orange County's. The substantive difference is that in the Orange County case, there was no cover-up after the losses.) Manhattan Investment Fund was a famous fraud, but the loss itself appears to have resulted simply from a strategy to short technology stocks during the tech bubble (a strategy that was ultimately correct but, in this case, executed too early).

Speculation that Started from a Legitimate Commercial Activity (Five Cases)  This is an interesting and important category: nonfinancial firms that undertook speculative or other trading that led to large losses. It includes Aracruz, Sadia, Showa Shell Sekiyu, Kashima Oil, CITIC Pacific, and possibly China Aviation Oil Corporation. Aracruz and Sadia were Brazilian companies that apparently moved from legitimate hedging of export earnings to leveraged speculation and are discussed in more detail further on. Showa Shell Sekiyu and Kashima Oil were two Japanese companies that speculated in FX, with the speculative activity probably originating in hedging FX payments related to oil imports. These two cases are particularly important because they highlight the importance of marking to market and recognizing losses early. Press reports indicate that Kashima Oil's losses accumulated over six years (and Showa Shell Sekiyu's over an unspecified but comparable period). Under Japanese accounting rules of the time, the losses could be rolled over and effectively hidden from shareholders. CITIC's losses appear to have originated in an attempt to hedge an acquisition in a foreign currency, but the hedge was highly levered for some reason. Some of these cases involved fraud (Showa Shell Sekiyu, Kashima Oil, CITIC Pacific, China Aviation Oil), and others did not (Aracruz, Sadia).

The first category, legitimate trading, is particularly important when considering risk management for financial institutions. Ten of the 11 cases (excluding Metallgesellschaft) involve financial or investment firms undertaking normal financial or investment activity. These events raise some fundamental questions about managing risk. Fraud is easy to categorize as illegal and unethical (even if the fraud itself can be difficult to identify), and there is no question that fraudulent activities should be prohibited. For legitimate financial activity, in contrast, there is no good way to distinguish between good activity that leads to profits and bad activity that leads to losses.

The bottom line is that there is no unambiguously good versus bad financial activity. Some investments or trading strategies are better than others, but trading and investing are risky and involve taking positions that may or may not work out, which is what makes managing risk, like managing any other part of a business, difficult and challenging.

Note that some frauds listed in Table 4.2 originated in legitimate trading activity (Sumitomo, Daiwa, National Australia Bank, and Codelco), but I do not include these as normal business because fraud was the central component of the event.

The columns "Failure to Segregate Functions" and "Lax Trading Supervision or Management/Control Problem" show that the nonfraudulent losses in Table 4.2 for financial institutions are not predominantly the result of operational or supervisory problems. (This finding is naturally in contrast to cases of fraudulent trading, in which failure to segregate functions or supervisory problems are usually present.) Among these 10 cases (excluding Metallgesellschaft), there were no cases of failure to segregate front- and back-office functions. In four cases (LTCM, Amaranth, Orange County, and Askin Capital Management), lax supervision or management and control problems did not appear to be an issue. For Bankhaus Herstatt and UBS, the trading activity (FX trading for Herstatt, equity derivatives related to Japanese bank convertible preference shares for UBS) was not supervised with the same rigor or integrated as fully as other activities at the bank. For Merrill Lynch's mortgage trading, the trader reportedly exceeded trading authorizations. The other three (Caisse d'Epargne, WestLB, and Calyon) may have involved lax trading supervision or other control issues.


When we turn to nonfinancial institutions (those firms for which "Primary Activity Finance or Investing" is no), we also find evidence of financial disasters that originated in normal business practices. Aracruz and Sadia speculated in the FX markets and lost large amounts, but according to a banker familiar with the Brazilian markets, this trading was relatively common and originated from standard business practices. Both firms had large export businesses and thus generated revenues in dollars versus costs in Brazilian reals. Standard business practice would be to hedge future export receipts by selling dollars forward. For many years, this trade was also a profitable speculative strategy because of the differential between Brazilian real and U.S. dollar interest rates. High Brazilian and low U.S. interest rates meant that forward FX rates implied depreciation of the real, but in fact, the real was relatively stable for a long period. This situation led many firms to move from hedging future export earnings to leveraged speculation.13 The real depreciated dramatically, starting in August 2008, which led to large trading losses. Although this depreciation did not last long, the losses were large enough that the firms were forced to close out and crystallize their losses.

Among nonfinancial institutions, even those events that did involve fraud usually originated in some way from a normal business activity. Showa Shell Sekiyu's and Kashima Oil's events probably originated in hedging import or export earnings, CITIC Pacific's event appears to have been a hedging transaction that was the wrong size, and China Aviation Oil may have started hedging jet fuel purchases.

Other Characteristics

In addition to fraud and normal business activity, two other characteristics need to be discussed.

Years over which Fraud Accumulated

Events involving fraud generally also involve longer periods over which losses accumulate, which is natural because the goal of fraud is to hide losses and delay the day of reckoning. We might also think that losses over a longer period would be larger because there would be more time for losses to accumulate, but the largest-loss events in Table 4.2 (LTCM, Société Générale, and Amaranth) were actually losses over a short period. There are competing influences: longer means more time for losses to accumulate, but larger losses come to light faster because they threaten the existence of the institution. In fact, of the three largest events, two resulted in the collapse of the institution (LTCM and Amaranth).

13 An alternative strategy, one that had the same economic impact, was for a Brazilian company to borrow in dollars (paying low U.S. interest rates) and pay the debt back out of future earnings in reals. This strategy worked well as long as the real did not depreciate substantially; if it did, it would leave the borrower with substantial foreign currency liabilities and FX losses.

Failure to Segregate and Lax Supervision

"Failure to segregate functions" refers to the failure to separate trading and back-office or record-keeping functions, with Nick Leeson's responsibility for both trading and back-office at Barings being probably the best-known example. Although this fault has been highly publicized, it does not appear to have been a substantial factor in most events—only 3 out of 22 (with 13 unknown or difficult to determine). One reason may be the emphasis segregation of responsibilities has been given in regulations and best practice guidelines: in recent years, firms have learned to close this gap.

"Lax trading supervision or management/control problem" refers to the failure by managers to properly supervise traders or otherwise exercise control. This issue has been a factor in many events (12 out of 21, with 8 unknown and 6 difficult to determine). I have included under this rubric a wide range of problems, from the extraordinary (the behavior of Daiwa managers that eventually led to Daiwa's expulsion from the U.S. banking market) to the all too common (the failure of managers to fully understand or appreciate the risk of products or businesses that subordinates were undertaking, as appears to have been a contributing factor with Union Bank of Switzerland and Merrill Lynch in 1987).

Summary

Fraud is a part of many financial disasters, but it is often used to cover up losses after the fact rather than being the source of the original loss. Nonfraud events are almost as common as fraud-related events. Losses resulting from normal business characterize many events. Some of the largest, in fact, were simply bad judgment or bad luck, not involving fraud or the exceeding of trading limits or mandates. LTCM, Amaranth, Union Bank of Switzerland, and Askin Capital all seem to fall in this category.

Lax trading supervision or other management/control problems contributed to many incidents. Failure to separate trading and back-office functions, however, has not been as prevalent, possibly because it is such a well-recognized problem.

Lessons Learned

One valuable product of reviewing financial disasters is to learn lessons on how to better manage a firm. (And it is important to recognize that issues contributing to financial disasters are often general management issues rather than specific risk issues.) The short paper "Rogue Traders: Lies, Losses, and Lessons Learned" (WillmerHale 2008) provides an excellent summary of the topic and reviews a few of the episodes considered here. The discussion is focused specifically on rogue traders (unauthorized trading involving fraud—Daiwa, Barings, AIB/Allfirst, Kidder Peabody, and Société Générale), but it is quite useful generally. Of note, the appendix provides a "lessons learned" checklist (p. 10), supplemented by my own observations:

A. Setting the right tone from the top: senior management and boards must encourage a culture of compliance and responsible risk taking
B. Senior managers must understand the complexities of the products their firms trade
C. Strong operations and middle-office process and infrastructure—to minimize errors, catch errors that do occur, and identify exceptions
D. Strong business line supervisory controls are essential
E. Successful traders may require more, not less, scrutiny
F. Management should ensure that incentive systems do not encourage excessive risk
G. Vacations are a good thing [because they force somebody else to manage the positions, shedding light on any nefarious activity]
H. Risk managers should be encouraged to challenge traders' valuations
I. Operations, risk management, and compliance reporting lines should be separate from the business lines
J. Dual or matrix reporting lines must be clear
K. Strong back-office controls are as essential as front-office controls
L. Effective risk management architecture is critical

4.3 SYSTEMIC FINANCIAL EVENTS

When we move from idiosyncratic to systemic financial events, we move from small potatoes to real money. Although idiosyncratic losses may be measured in hundreds of millions of dollars, systemic losses are measured in hundreds of billions.

Systemic financial events come in a variety of forms: hyperinflation and currency crashes, government debt default or restructuring, and banking crises. This section touches only the surface. A wide literature covers the topic: Mackay (1932), originally published in 1841, provides an entertaining look at the South Sea Bubble in England, the Mississippi scheme in France, and the tulip mania in Holland. Kindleberger (1989) is a classic work on asset manias and crashes, and Reinhart and Rogoff (2009) is a comprehensive and instructive compendium of financial crises across 800 years and more than 60 countries.

Table 4.5 shows what Reinhart and Rogoff call the "big five" crises in advanced countries from World War II through mid-2000 (that is, before the Great Recession of 2008–2009). Reinhart and Rogoff briefly discuss the bailout costs of financial crises. They point out that the estimates vary widely and, more importantly, that the true costs extend beyond the commonly quoted bailout costs to cover the fiscal impact of reduced tax revenue and other fiscal stimulus costs. Whatever the true costs, however, they are large. Table 4.5 shows that the 1984–1991 U.S. savings and loan (S&L) crisis cost somewhere between 2.4 percent and 3.2 percent of GDP. Stated in terms of 2007 GDP (to be comparable with the losses quoted in Table 4.1), it would be roughly $340 billion to $450 billion. Compared with this amount, the individual company losses are small.

If we turn to the Great Recession of 2008–2009, the costs are similarly huge. Consider just Fannie Mae and Freddie Mac, which were taken over by the government in late 2008 as the subprime housing crisis exploded. Fannie Mae reportedly lost $136.8 billion in the two-and-a-half years from the fourth quarter of 2007 through the first quarter of 2010. As of May 2010, the U.S. government had provided $145 billion in support to Fannie Mae and Freddie Mac.14 The Congressional Budget Office projects that the total cost may reach $389 billion.15 Note that this is only a fraction of the cost for the overall U.S. financial meltdown, and the United States is only a part of the overall global damage.

Fannie Mae and Freddie Mac are also important because they are examples of the systemic nature of the incentives, costs, and policy decisions that contribute to systemic crises. Fannie and Freddie have suffered such large losses as much because they were following their congressional mandate—to subsidize the U.S. residential housing market and expand access and affordability—as because they made specific management or risk mistakes. For decades, investors assumed that an implicit U.S. guarantee (now made explicit) stood behind Fannie and Freddie paper, and investors provided funding at rates better than those other financial institutions could access. This situation skewed costs and incentives in the mortgage market, contributing to Fannie's and Freddie's large holdings and large losses and also contributing to the overall residential real estate bubble.

14 As of May 2010, according to the New York Times (Applebaum 2010) and Bloomberg (May 10, 2010).
15 Data as of June 2010. See www.cbo.gov/ftpdocs/108xx/doc10878/01-13-FannieFreddie.pdf.


TABLE 4.5 Selection of Systemic Banking Crises for Developed Countries (prior to 2007)

Spain, 1977–1985 (estimated bailout: 16.8% upper, 5.6% lower). Apparently a persistent economic slump, the aftereffect of OPEC's oil price rise in the mid-1970s, and the transition to democracy led to a financial crisis—"52 banks (of 110), representing 20 percent of banking system deposits, were experiencing solvency problems."

United States (S&L crisis), 1984–1991 (estimated bailout: 3.2% upper, 2.4% lower). Financial deregulation and the aftereffects of Regulation Q led to overextension by many S&Ls. "More than 1,400 S&Ls and 1,300 banks failed."

Norway, 1987–1993 (estimated bailout: 4.0% upper, 2.0% lower). "Financial deregulation undertaken during 1984–1987 led to a credit boom ... accompanied by a boom in both residential and nonresidential real estate." Problems at small banks began in 1988. "The turmoil reached systemic proportions by October 1991, when the second- and fourth-largest banks had lost a considerable amount of equity."

Sweden, 1991–1994 (estimated bailout: 6.4% upper, 3.6% lower). A financial and real estate bubble developed in the 1980s. A variety of factors (led by the 1990 global slowdown) caused the bubble to burst. "Overall, five of the six largest banks, with more than 70 percent of banking system assets, experienced difficulties."

Japan, 1992–1997 (estimated bailout: 24.0% upper, 8.0% lower). A stock market and real estate bubble burst around 1990. Banks suffered from sharp declines in stock market and real estate prices.

Notes: These are the big five crises of developed countries from World War II through mid-2000 mentioned in Reinhart and Rogoff (2009, 164). The "Estimated Bailout" figures show upper and lower estimates of costs as a percentage of GDP (Table 10.9). The notes are based on Laeven and Valencia (2008) and are supplemented with the current author's comments.

Sources: Based on Reinhart and Rogoff (2009) and Laeven and Valencia (2008).


The skewed incentives were and continue to be government policy, and these skewed incentives contributed to a systemic crisis whose costs overshadow any idiosyncratic disaster.

A number of firms that were involved in idiosyncratic events listed in Table 4.1 were also caught in the systemic crisis of 2008–2009. The losses resulting from systemic problems were many times the losses caused by idiosyncratic events.

As an example, Hypo Group Alpe Adria shows up in Table 4.1 as losing €300 million in 2004 because of a currency swap (with subsequent fraud to hide the extent of the loss). This amount pales next to Hypo's recent credit and other losses. In December 2009, the bank was taken over and rescued by the Republic of Austria. The after-tax loss for 2009 was €1.6 billion, and as of early 2010, the problems were continuing. As another example, Dexia Bank suffered an idiosyncratic loss of €300 million in 2001, but losses for 2008 were €3.3 billion and required state aid from Belgium, France, and Luxembourg.

4.4 CONCLUSION

Reading about financial disasters helps to provide feedback, reminding us that disasters, fraud, and just plain bad luck do happen. Properly used and analyzed, the events can help us learn what to do and what not to do in managing a financial firm. Many disasters are precipitated by simple and obvious mistakes. Rather than gloat over another's adversity, though, we should take away the lesson that it is all too easy to fall prey to such mistakes.

This book focuses on idiosyncratic risk, and this chapter has focused primarily on idiosyncratic risk events—events that are triggered by circumstances within a single firm and limited to that firm. This is in contrast to systemic or macroeconomic risk events that play out across the whole economy, or even globally. Systemic risk events, however, are far more damaging because they involve substantial dislocations across a range of assets and across a variety of markets. Furthermore, the steps a firm can take to forestall idiosyncratic risk events are often ineffective against systemic events.


CHAPTER 5

Practical Risk Techniques

The discussion so far has been about how to think about risk and uncertainty in general terms. We now turn to the specifics of financial risk measurement and management. We introduce in this chapter some of the tools of quantitative risk measurement and show how they apply in practice. The goal is to present the ideas and intuition, showing how the tools are used, and to avoid mathematical and technical complication. The details, the mathematical background and formulae, are absolutely important, but these details are left for later chapters.

This chapter is aimed at two audiences, two groups that often speak different languages and inhabit different worlds, but groups that need to work together for the effective management of risk.

The first group is the managers running the firm, trading desk, or portfolio. These managers make the business decisions but often will not have strong technical training—they are consumers of risk measurement services rather than producers of them.

The second group is the risk professionals, or quants, responsible for producing the risk reports and other services. They will generally have a strong technical and mathematical background but often will have less experience in managing a business, communicating ideas in a nontechnical way, and the soft skills of interpersonal interactions.

My goal for both groups is to explain how to think about and use the quantitative tools such as volatility or VaR (value at risk). Each group can have its own challenges in understanding and using risk tools, but these challenges arise from opposite poles of a continuum.

The management group understands how the business works and intuitively how risk affects decisions, but tends to have less grasp of the mathematical arcana behind risk tools. The technical details often present a deterrent to understanding and using these tools. The ideas that risk tools try to capture are not complicated, and when properly explained, the tools can be stripped of much of the technical jargon. This chapter attempts to explain to a manager how their business decision making can be enhanced with proper understanding of risk tools.

The risk professional group has a firm grasp of the mathematical and technical details going into the calculation of the risk measures, but usually has less experience with the running of the business and how risk considerations enter into management decisions. This chapter attempts to explain to a risk professional how risk tools are used in making business decisions.

This chapter is focused on the common ground shared by the two audiences, on using information about the P&L distribution to manage the business. Managers need the tools and the understanding to use the P&L distribution. Risk professionals need to provide the tools, advice, and training to support the whole organization in using the P&L distribution to effectively manage risk. The common ground, however, does not include the countless details of estimating the P&L distribution. Managers need to use the distribution and need to have the confidence that the distribution is a reasonable estimate, but usually will not care about the technical details of how it is estimated. Risk professionals need to produce the distribution, and need to assure outside users that the estimate is reasonable, but do not need to communicate all the gory details.

Risk management is effective when the two groups—managers and risk professionals—work together. More often than not, the skills of the two groups—the management skills to use the P&L distribution versus the technical expertise to produce the P&L distribution—reside in separate individuals with different backgrounds and skills. An effective organization is one in which these two groups work together to solve the firm's problems. The managers must stretch to use unfamiliar tools and understand an unfamiliar language, while the risk professionals must strive to provide complex concepts and data in a simple, direct manner.

5.1 VALUE OF SIMPLE, APPROXIMATE ANSWERS

One theme of this chapter is the value of simple, approximate answers. It is better to have a broad outline of the risk today than a meticulous and detailed understanding one year hence (after the business has blown up). Physics students are taught to estimate orders of magnitude for physical problems. The delightful "Order of Magnitude Physics" (Mahajan, Phinney, and Goldreich 2006) is devoted to the topic.

Most technical education emphasizes exact answers. If you are a physicist, you solve for the energy levels of the hydrogen atom to six decimal places. If you are a chemist, you measure reaction rates and concentrations to two or three decimal places. In this book, you learn complementary skills. You learn that an approximate answer is not merely good enough; it's often more useful than an exact answer. When you approach an unfamiliar problem, you want to learn first the main ideas and the important principles, because these ideas and principles structure your understanding of the problem. It is easier to refine this understanding than to create the refined analysis in one step.

So it is with understanding financial risk. We always meet new risks, and it is incredibly important to have tools and techniques for understanding the main ideas and the broad outline of the risks. Effective risk management occurs when a firm can integrate management expertise with technical skills.

5.2 VOLATILITY AND VALUE AT RISK (VaR)

Before discussing volatility and VaR, we need to think about what financial risk measurement is. Financial risk is in some ways so simple, because it is all about money—profit and loss and the variability of P&L. Political risk is about the possibility of insurrection, expropriation—or the Republicans winning the next presidential election. Risk in flying an airplane is that an engine may flame out or that fog may obscure the landing strip. Risk is multifaceted and amorphous in so many areas. For a financial firm, the primary focus is whether tomorrow or next year will show a profit or a loss. Of course, other things matter, but for a financial firm those other things are dominated by the profit and loss—the P&L. What generates the P&L is multifaceted and possibly amorphous, but the P&L itself is pretty concrete and simple. Money is something we can measure, something most of us can agree about, that more is better than less and a profit is good and a loss is bad.

From this, it follows that the distribution of P&L is what matters when discussing financial risk. Let's take an extremely simple financial business, betting on the outcome of a coin flip. We make $10 on heads and lose $10 on tails. We could graph the P&L distribution as in Panel A of Figure 5.1. The probability is one-half of losing $10 and one-half of making $10. This kind of distribution is fundamental to how we should think about financial risk. It shows us the possible outcomes (possible losses and gains along the horizontal) and how likely each of these is (probability along the vertical).
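To make the idea concrete, here is a minimal Python sketch (an illustration, not from the original text; the simulation size and seed are arbitrary) that simulates this coin-flip bet and tabulates the empirical P&L distribution:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One play of the bet: +$10 on heads, -$10 on tails, each with probability 1/2.
n_plays = 100_000
pnl = rng.choice([-10.0, 10.0], size=n_plays)

# Tabulate the empirical P&L distribution: outcome -> relative frequency.
outcomes, counts = np.unique(pnl, return_counts=True)
for outcome, count in zip(outcomes, counts):
    print(f"P&L {outcome:+.0f}: frequency {count / n_plays:.3f}")
# Each frequency should come out close to the true probability of one-half.
```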

For managing risk, the main thing that we want from the P&L distribution is an understanding of how variable the P&L can be. In this example, it is very simple—either −$10 or +$10. In practice, it is more complicated and we want to know things like how much we might make or lose on a standard day, and how much we might make or lose if things go badly.

When flipping a coin, we can have some real confidence that each outcome is actually one-half; in a real business, we cannot have such confidence that the probabilities tomorrow will be exactly as we believe them to be. This goes back to the idea of frequency-type versus belief-type (objective versus subjective) probabilities. But we really have no choice in financial risk management except to use our belief-type probabilities. We must use them with caution, have some humility that our estimates will be wrong, but use them we must.

Panel B of Figure 5.1 shows a more realistic P&L distribution. The possible losses and gains are along the horizontal but, in contrast to Panel A, there may be any of a wide range of possible outcomes. The most likely outcome is somewhere around zero, but there is some possibility of large profits and some possibility of large losses.

Financial risk measurement is really nothing more than what is shown in Figure 5.1—the P&L distribution. When we know the P&L distribution, know the possibilities of gains versus losses, when we understand what generates the distribution and what causes those gains and losses, then we understand virtually everything we can about financial risk.

We don't know what will happen tomorrow because we never can know with certainty what will happen tomorrow. But we have some range on the possibilities. The distribution in Figure 5.1 shows us the possibilities. We have to give up on certainty and embrace uncertainty. We have to move away from thinking "the P&L tomorrow will be $50,000" to "the P&L will most likely be between −$50,000 and +$50,000, but there is a 5 percent chance we will lose $150,000 or worse." This is not easy, and it is a wrenching change from our natural inclination, but it is the essence of a mature understanding of risk.

FIGURE 5.1 P&L from Coin Toss Bet and Hypothetical Yield Curve Strategy (Panel A: Coin Toss Bet, probability one-half each on −$10 and +$10; Panel B: Hypothetical Yield Curve Strategy)
Reproduced from Figure 5.1 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


In this sense, financial risk is extraordinarily simple—we need only to understand the P&L distribution. In practice, of course, financial risk is never simple. The difficulties are twofold. The first is purely conceptual: coming to grips with life as a distribution rather than a unique outcome ("P&L will be between −$50,000 and +$50,000 for two out of every three trading days but will be worse than $150,000 roughly once a year").

The second difficulty is that we will never know the P&L distribution with certainty, and even arriving at a reasonable estimate can be quite difficult. But this is part of living with uncertainty—we have to work hard to envision the possible future outcomes, to learn about how the world is and how it might be. Even more difficult, we have to accept that even our P&L distribution has uncertainty, and so we need to treat all our numbers with respect and caution.

Estimating the P&L Distribution

In this chapter I ignore the details of various approaches and all the problems associated with estimating the P&L distribution. Later chapters will deal with these, but for now we just assume that we have a reasonable estimate.

The easiest way to understand how to use the P&L distribution is to work with an actual example. Turn to Figure 5.2, which is an estimate of the one-day P&L distribution for holding $20 million of a U.S. Treasury bond (the 10-year as of January 2009, the 3.75 percent of November 15, 2018).

FIGURE 5.2 P&L Distribution for U.S. Treasury Bond, Showing Volatility (Standard Deviation): location (mean = 0) and scale (standard deviation, e.g., $130,800)
Based on Figure 5.2 from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

From looking at the graph, we can tell a few things, before talking about any numbers:

• The price will go up or down with roughly equal chance.
• Most likely we will see no change or a small change one way or the other.
• Large changes are less likely than small changes.
• Large changes do occur.

Summary Measures—Volatility and VaR

There is real value, however, in actually putting numbers around this. Numbers help systematize and organize our thoughts. But they are only tools to help us understand the world, and this means two things. First, we have to understand what the tools and the numbers mean. Numbers without understanding are worse than useless—they mislead, obfuscate, or give a false sense of security.

Second, the numbers are our servants, not we theirs. We can use them to understand the world better, but we should not be slaves to them. Numbers should help us be honest with ourselves, and they should aid in communicating with others. They should not obfuscate and confuse. Managers should never accept numbers that are not simple, clear, and concise. Risk professionals should always strive to communicate clearly and to enlighten using numbers, but must remember that the numbers must communicate something about the real world. "It's not the figures themselves, it's what you do with them that matters."

So let's put some numbers around the distribution. The most important aspect of the distribution (from a risk management perspective) is the variability, the dispersion, the spread of the distribution. We would like a single number that would tell us the dispersion of the distribution. But there is no one best way to summarize the dispersion. Indeed, the "dispersion of the distribution" is a rather vague concept. Like learning to live with uncertainty itself, we have to learn to live with some vagueness and ambiguity in describing the variability or dispersion of the P&L distribution.

For risk professionals, this vagueness or ambiguity may be one of the hardest things to master in moving from merely measuring risk to a fuller understanding of managing risk. Managers, through inclination and experience, tend to have a much higher tolerance for living with ambiguity and vagueness.

Getting the right balance between vagueness and precision is difficult. Using and understanding quantitative tools requires a careful balance between too much vagueness (where we can't say anything useful) and false precision (where we can be very precise about things with no connection to the real world).

There are two common measures used to summarize the dispersion or variability. (Although there are others, these two are the most common; if you truly understand everything about these, you will understand more than 90 percent of financial professionals.) The two measures are volatility (also known as standard deviation) and value at risk (VaR). Personally, I use volatility more and generally prefer to work with it, although this may be a minority opinion among risk professionals. When properly used, volatility and VaR are both useful. They will by and large tell us the same information, although there will be times when they can provide meaningfully different views of the distribution.

Volatility for the Bond

The easiest way to understand volatility and VaR is to turn back to the graph of the P&L distribution. Volatility and VaR are calculated in different ways, but both describe the spread, or dispersion, of the distribution.

Figure 5.2 shows the volatility, or standard deviation. For the $20 million bond position, the volatility is $130,800. The volatility measures the spread around the central value (around the mean). It is calculated as an average of squared deviations from the mean. That is, for every possible profit, we calculate the distance from the mean, square that distance, and take the average of the squares (finally taking the square root):

\[ \text{Volatility} = \sqrt{\text{Average}\big[(\text{Profit} - \text{Mean})^2\big]} \]

It is important to understand that volatility is an average of gross changes, not net changes. When we consider changes over time, it is natural to think that the ups and downs cancel out, leaving us with a net change (generally close to zero). But volatility is the square root of the more fundamental statistical concept called the variance, and the variance is the average of squared changes. Positives and negatives do not cancel out. We use volatility (square root of the variance) rather than the variance itself because volatility is in the same units as P&L and prices and thus more intuitive.
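As a sketch, here is the formula above in code, applied to made-up daily P&L numbers (the figures are purely hypothetical):

```python
import numpy as np

# Hypothetical daily P&L observations, in dollars (illustrative only).
pnl = np.array([120_000.0, -80_000.0, 15_000.0, -140_000.0, 95_000.0,
                -30_000.0, 60_000.0, -110_000.0, 40_000.0, 25_000.0])

# Volatility = square root of the average squared deviation from the mean.
mean = pnl.mean()
volatility = np.sqrt(np.mean((pnl - mean) ** 2))

# np.std uses the same averaging by default (ddof=0), so the two agree.
assert np.isclose(volatility, np.std(pnl))
print(f"mean = {mean:,.0f}  volatility = {volatility:,.0f}")
```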

The most important thing in using volatility (or VaR, for that matter) is understanding what it tells us and how to use the information. Figure 5.2 helps us understand how to use volatility. The volatility tells us the spread for the distribution. For most well-behaved distributions, roughly 30 percent of the outcomes will be better or worse than the volatility. For our bond, the volatility is $130,800, and so we should expect that 30 percent of the time, or roughly one day in three, the loss will be worse than −$130,800 or better than +$130,800.

How do we use this? We need to examine things like our internal tolerance for gains versus losses or the firm capital that is available to absorb losses. Can we live with a loss of $130,800 on a regular basis? How much does such a loss worry me? Does it make my stomach churn? Or is that loss really small, maybe too small? And remember that $130,800 sets the level for "standard trading conditions" and that the P&L will be between −$130,800 and +$130,800 two days out of every three. But that third day can sometimes be a very bad day—every once in a while there will be losses much worse or profits much better than $130,800. How much worse? We'll discuss the issue in more detail shortly, but for now, just gut feeling gets us a long way—maybe asking if I would lose my job if losses were three or four times that. These questions start to set a scale for how much risk is in the $20 million bond position.

In many cases, such as managing a portfolio, a useful aid in informing our gut instinct is to calculate the volatility as a percentage of the assets. Then I can compare that volatility with other investments, things I may have long familiarity with.

Say I was a portfolio manager with $500 million in assets. In this case, a loss of $130,800 would be really small, only 0.03 percent of the portfolio value. This would translate into roughly 0.4 percent annualized volatility. (We translate from daily to annual by multiplying by the square root of the days in a year—roughly 255 trading days—but we discuss this more further on.) Such a figure seems really small relative to, say, the stock market volatility (on the order of 20 percent annualized). This comparison helps to inform us about the bond position.
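The arithmetic behind that comparison, as a quick sketch (the $500 million portfolio is the hypothetical from the text):

```python
import math

daily_vol_dollars = 130_800.0     # one-day volatility of the bond position
portfolio_value = 500_000_000.0   # hypothetical portfolio size from the text

daily_vol_pct = daily_vol_dollars / portfolio_value   # roughly 0.03% per day
annual_vol_pct = daily_vol_pct * math.sqrt(255)       # roughly 0.4% per year

print(f"daily volatility:  {daily_vol_pct:.4%}")
print(f"annual volatility: {annual_vol_pct:.2%}")
# Compare with roughly 20% annualized volatility for the stock market.
```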

DIGRESSION ON THE NORMAL DISTRIBUTION

The normal distribution is used so much in risk measurement that we really need to discuss it, even though the debate about the pros and cons of the normal distribution is often more esoteric than we are interested in here.

The distribution of P&L usually looks something like that shown in Figure 5.2—bell-shaped with most of the probability toward the center and low probability of large gains or large losses. The normal distribution is the paradigm, or model, bell-shaped distribution. Mathematicians have worked with it for many centuries, and it is, as these things go, easy to use. Figure 5.3 shows a sample normal distribution.


Sixty-eight percent of the probability is within ±1σ of the mean (so 32 percent outside of ±1σ, roughly 15 percent less than −1σ and 15 percent more than +1σ).1

We are most interested in the probability that an observation will be within a certain distance of the mean, say ±1σ or ±2σ. The probability that an observation will be within ±2σ is 95 percent, and the probability it will be below −2σ is 2.5 percent. If the P&L is normally distributed, then we can say that the probability the P&L will be below −2σ is 2.5 percent. The probability it will be below −1σ is about 16 percent.
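These standard normal probabilities are easy to verify; a quick sketch (the use of scipy here is an incidental choice, not something the text prescribes):

```python
from scipy.stats import norm

# Probability within +/-1 and +/-2 standard deviations of the mean.
print(f"within 1 sigma: {norm.cdf(1) - norm.cdf(-1):.1%}")  # about 68.3%
print(f"within 2 sigma: {norm.cdf(2) - norm.cdf(-2):.1%}")  # about 95.4%

# Probability of falling below -1 sigma and below -2 sigma.
print(f"below -1 sigma: {norm.cdf(-1):.1%}")  # about 15.9%
print(f"below -2 sigma: {norm.cdf(-2):.1%}")  # about 2.3%
```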

We do have to be careful in using the normal distribution. The normal is valuable as an aid to intuition, to understand how and why risk measurements behave as they do. But it is not a perfect description of the real world. Sometimes the P&L distribution looks more like Figure 5.4, and in this case there will be losses larger than we would think were we to really believe the normal distribution. In other cases, there will be more large moves (both positive and negative) than the normal distribution would lead us to believe. As in so many aspects of risk management, we have to always use these tools carefully.

FIGURE 5.3 Normal Distribution (horizontal axis in standard deviations; 68 percent of the probability lies within ±1, 95 percent within ±2)
Reproduced from Figure 5.4 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

1 The rule that 32 percent of the probability falls outside of ±1σ is strictly true for the normal distribution. For most reasonable P&L distributions we run into in finance, it will be somewhere on the order of 20 percent to 30 percent. In other words, under standard trading conditions, P&L will be outside ±1σ roughly one day out of three or four or five.


When using a normal distribution, the volatility (standard deviation) tells us virtually everything about the distribution. For a normal distribution, we know how much probability is in the central part relative to the tails, how much in the upper versus lower tail. The standard deviation sets the scale, or the dispersion, and once we know that the standard deviation of a normal distribution is $130,800, we can calculate the exact probability of a loss of $256,000 or worse (2.5 percent chance) or $304,000 or worse (1 percent chance). For a normal distribution, the volatility tells us virtually everything.

But we can use the volatility even if the distribution is not normal. We can calculate the volatility for any set of data and for any distribution (within reason). For a non-normal distribution, the standard deviation is only one among various ways we might summarize the dispersion, but it is still incredibly valuable in giving us a first view of the dispersion. It will not tell us everything (as it does for a normal distribution) but we should still use it for what it can tell us.

FIGURE 5.4 Nonsymmetrical Distribution with Fat Tail
Reproduced from Figure 5.5 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

VaR for the Bond

Figure 5.5 shows the same bond P&L distribution as in Figure 5.2 but now with both the volatility (standard deviation) and the 5%/95% VaR drawn in: Panel A shows the volatility and Panel B shows the VaR. The VaR is simply the level of loss where there is a 5 percent chance losses will be worse and a 95 percent chance they will be better. The VaR is nothing more and nothing less than one way to summarize the spread, or dispersion, of the bond.

The volatility is one number that summarizes the spread and is calculated by taking the average of squared deviations. The 5%/95% VaR is another number that summarizes the spread and is calculated by setting the chance of worse losses at 5 percent. (Remember that for a normal distribution, the probability of losses worse than the volatility is about 16 percent. This means that when P&L is normal, the volatility can be thought of as the 16%/84% VaR.)
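To make the definition concrete, a sketch that reads the 5%/95% VaR directly off a sample of P&L; the normal distribution, the $130,800 volatility, and the sample size here are illustrative assumptions, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Simulated one-day P&L: normal with mean zero and the bond position's
# volatility of $130,800 (normality is assumed purely for illustration).
pnl = rng.normal(loc=0.0, scale=130_800.0, size=1_000_000)

# The 5%/95% VaR is the loss level with a 5 percent chance of worse
# outcomes: the 5th percentile of the P&L distribution, quoted as a loss.
var_5 = -np.percentile(pnl, 5)
print(f"5%/95% VaR = ${var_5:,.0f}")  # about $215,000 = 1.645 x volatility
```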

It is absolutely critical to remember that what really matters is the underlying P&L distribution, and the volatility and the VaR are simply two ways of summarizing the spread of that distribution. Sometimes one is more useful, sometimes the other. Personally, I use the volatility more often, but many people prefer to look at the VaR first.

FIGURE 5.5 P&L Distribution for U.S. Treasury Bond, Showing Volatility and VaR (Panel A: volatility, or standard deviation, e.g., $130,800; Panel B: VaR = Y, e.g., $215,100, with tail area Z, e.g., 5%)
Reproduced from Figure 5.8 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

For a normal distribution, there really is no difference between them, in the sense that if we know one, we can always calculate the other. For a normal distribution, they are equally useful, and it is purely a matter of personal preference which one is used. For a normal distribution, the 5%/95% VaR is 1.645 × the volatility, the 1%/99% VaR is 2.326 × the volatility, and the 0.1%/99.9% VaR is 3.09 × the volatility. Most P&L distributions are not normal, however, and there will be some difference between the volatility and the VaR. They may give us different views of the distribution.
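These multipliers come straight from the inverse normal distribution function; a quick check, as a sketch (again using scipy as an incidental choice):

```python
from scipy.stats import norm

volatility = 130_800.0  # the bond position's one-day volatility

for tail_prob in (0.05, 0.01, 0.001):
    multiple = -norm.ppf(tail_prob)  # 1.645, 2.326, 3.090
    print(f"{tail_prob:.1%} tail: VaR = {multiple:.3f} x volatility"
          f" = ${multiple * volatility:,.0f}")
```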

How would we use the VaR? In very much the same way as the volatility, as a way of helping our gut determine whether we have too little or too much risk. For our bond example, the 5%/95% VaR is roughly $215,000, and so we should expect to see losses worse than that roughly 5 percent of the time, or one day out of 20. Can we live with this kind of loss? Again, we can ask our gut whether we would be comfortable seeing such losses once a month, or losses maybe two or three times larger on a yearly basis. If that is too much, then the risk may be too high. If we would not even notice because the portfolio is so large, then maybe the position is too small.

Two Uses for Volatility and VaR—Normal Trading versus Extreme Events

Both volatility and VaR are widely used in the finance industry. There are two related but somewhat divergent uses for these measures, and highlighting these two uses can clarify how and why we use them. Volatility and VaR are used for one or both purposes:

1. To standardize, aggregate, and analyze risk across disparate assets (or securities, trades, portfolios) under standard or usual trading conditions
2. To measure tail risk, or extreme events

Risk measurement texts often focus on the latter—tail events—but it is equally important to focus on risk under standard or usual trading conditions. Standardizing and analyzing risk across disparate assets and large portfolios provides information necessary for understanding and managing risk and P&L under standard trading conditions, which are, by definition, most of the time. Furthermore, analyzing risk under usual trading conditions provides valuable clues to performance under more extreme conditions.

When considering risk under standard or usual trading conditions, volatility can often be more useful than VaR. Volatility is particularly suitable for measuring the spread of the central part of the distribution, exactly what one is interested in when considering usual trading conditions. P&L will be outside of ±1σ roughly 30 percent of the time and inside roughly 70 percent of the time, so the standard deviation gives a good feel for usual trading conditions.

VaR is more commonly used for the second purpose—measuring extreme, or tail, events. In this context, VaR is sometimes referred to as the statistically worst-case loss, but this is a horribly misleading idea. VaR should be viewed as a periodically occurring event that, while not likely, we should be perfectly comfortable with. We should think of VaR as providing a scale for possible large losses, not a maximum loss or worst-case scenario. It really is true in markets that, whatever our worst-case scenario, something worse will happen sometime, somewhere.

We also must remember that, by their nature, tail events are rare, so measuring tail events is inherently difficult and open to large errors and uncertainty. As a result, when applied in this second sense, VaR must be used cautiously and any conclusions treated with care. I have more to say about measuring tail events further on.

The two uses of volatility and VaR can never be precisely separated, but the conceptual differentiation clarifies some of the uses, benefits, and limitations of volatility and VaR. For usual or normal trading conditions, standard statistical and quantitative techniques work pretty well, and the interpretation of results is relatively straightforward. Assuming normality or linearity of the portfolio is often acceptable when considering the central part of the distribution, meaning that simple and computationally efficient techniques can be used.

Measuring tail events, in contrast, is delicate, and the appropriate statistical and quantitative techniques are often complex. In the tails, normality is generally not an appropriate assumption, and more sophisticated statistical assumptions and quantitative, numerical, and computational techniques must be applied. The inherent variability of tail events is generally higher than for the central part of the distribution, and uncertainty caused by estimation error and model error is larger. As a result, the estimation of VaR or other summary measures for tail events is inherently more difficult, and the use and interpretation of results more problematic.

Time Scaling

The P&L distribution in Figure 5.2 is the P&L over some specified period. Our example is the P&L for one day; for other examples it might be for 10 days, but it will always be the P&L for some period. We will often want to know what the P&L variability might be for alternate periods. To get a truly correct answer there is no substitute for a complete analysis, focusing specifically on the period of interest.

Having said that, we should remember the value of simple and approximate answers. There is in fact a simple way to approximately convert from one period to another. Volatility and VaR for most financial assets grow like √t—in other words, the volatility for 10 days will be √10 = 3.16 × the one-day volatility.

Why is it square root and not linear, as would seem natural? The answer is somewhat subtle. When we look at changes over time—today, tomorrow, the next day, and so on—and add them up, some days go up, some days go down, and those changes cancel so that (roughly) the average net change is zero—positives and negatives balance out. But remember that the volatility is the square root of the variance, and the variance is the average of squared changes. For the variance, positives and negatives do not balance out or cancel. The variance is essentially a gross change.

In the simplest case, the squared changes add over time, so that the variance grows linearly with time. In other words, the variance scales linearly. In mathematical and statistical terms, the variance is the more fundamental concept, but in practical applications we use the volatility (standard deviation) because it is much more intuitive. We care about P&L and changes in prices, and volatility is in the same units as P&L and prices, so it makes sense to talk about and think about volatility. Because the variance (the more fundamental concept) scales linearly and volatility is the square root of the variance, the volatility scales like square root.

Consider our example of the U.S. bond. The one-day volatility is $130,800, so the 10-day volatility would be about 130,800 × √10 ≈ 130,800 × 3.16 ≈ $413,300. The one-year volatility would be about 130,800 × √255 ≈ $2,088,700.
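A sketch of the square-root-of-time scaling for the bond example (note that the $413,300 in the text rounds √10 to 3.16; the exact figure is closer to $413,600):

```python
import math

one_day_vol = 130_800.0

# Variance (average squared change) scales linearly with time, so
# volatility scales with the square root of time.
ten_day_vol = one_day_vol * math.sqrt(10)    # about $413,600
one_year_vol = one_day_vol * math.sqrt(255)  # about $2,088,700

print(f"10-day volatility:   ${ten_day_vol:,.0f}")
print(f"one-year volatility: ${one_year_vol:,.0f}")
```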

5.3 EXTREME EVENTS

The most difficult and vexing problem in quantitative risk measurement is trying to quantify tail, or extreme, events. Tail events are important because large losses are particularly significant and can, in truly extreme situations, wipe out a firm.

Measuring tail events is difficult for some fundamental reasons. First, tail, or extreme, events are by their nature rare and thus difficult to measure. By definition, we do not see many rare events, so it is difficult to measure them reliably and to form judgments about them. Second, because of the scanty evidence, we are often pushed toward making theoretical assumptions about the tails of distributions (extreme events). Unfortunately, simple and common assumptions are often not appropriate for tails. Most importantly, the assumption of normality is often not very good far out in the tails.

Although rare events are rare, they do occur, and measurements across different periods, markets, and securities show that in many cases, extreme events occur more often than they would if the P&L behaved according to the normal distribution in the tails. This does not mean the normal distribution is a bad choice when looking at the central part of the distribution, but it does mean that it can be a poor approximation when examining extreme events.

Broadly speaking, three approaches can be taken when dealing with tail events:

1. Simple rules of thumb
2. Alternative but tractable distributional assumptions
3. Extreme value theory, which focuses on the asymptotics of tail events

We will stay in the spirit of this chapter and talk only about a simple rule of thumb, leaving the other ideas for later chapters.

Rule of Thumb for Tail Events

Using simple rules of thumb may not sound sophisticated, but it is, in fact, a sensible strategy. Litterman (1996), speaking about Goldman Sachs, says, "Given the non-normality of daily returns that we find in most financial markets, we use as a rule of thumb the assumption that four-standard-deviation events in financial markets happen approximately once per year" (p. 54). We can interpret this statement probabilistically from three different perspectives (all equivalent but each giving a different viewpoint):

1. If daily returns were normal, a once-per-year event would be about 2.7 standard deviations. Litterman's rule of thumb is to assume that actual once-per-year changes are 4.0 standard deviations, or 1.5 times larger than changes that would occur if events were normally distributed (4.0σ instead of 2.7σ). This seems a significant but not extreme assumption.

2. If daily returns were normal, a four-standard-deviation event would have a probability of about 0.0032 percent, which would make it roughly a once-per-125-year event (1/0.000032 = 31,250 days, or about 125 years), whereas Litterman's rule of thumb says it is a once-per-year event. This seems a much more radical assumption—instead of four-standard-deviation events occurring once every 125 years, he says they occur once every year.


3. If we assume that four-standard-deviation events occur once per year, the probability of a four-standard-deviation event is about 0.39 percent (1/255) instead of 0.003 percent. (This is the same as the second view but stated in probabilities rather than "once per x years.")
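The arithmetic behind these three restatements can be checked directly; a minimal sketch, assuming 255 trading days per year:

```python
from scipy.stats import norm

trading_days = 255

# 1. Under normality, a once-per-year daily event sits at about 2.7 sigma.
z = -norm.ppf(1 / trading_days)
print(f"once-per-year move under normality: {z:.2f} sigma")

# 2. Under normality, a 4-sigma daily loss has probability about 0.0032%,
#    i.e., roughly once per 31,000 days, or about 125 years.
p = norm.cdf(-4)
print(f"P(4-sigma loss) = {p:.4%}, once per {1 / (p * trading_days):,.0f} years")

# 3. The rule of thumb instead treats 4-sigma events as once-per-year:
#    probability 1/255, about 0.39% per day.
print(f"rule-of-thumb daily probability: {1 / trading_days:.2%}")
```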

Stated the first way, the rule of thumb has intuitive appeal—large moves are 1.5 times larger than they would be if the P&L were normally distributed. There is, in fact, good evidence that returns and P&L in financial markets are not normal and have fatter tails (more large moves) than predicted by a normal distribution. A factor like 1.5 is not huge. For the U.S. bond we have been considering, 2.7 standard deviations (the prediction assuming normality) would be $353,000, while four standard deviations (the rule of thumb) is $523,000. A loss of $523,000 is clearly worse than $353,000 but not enormously worse. Knowing that financial markets have more large losses than predicted by normality, using $523,000 instead of $353,000 seems reasonable.

Only from the latter two perspectives, viewed as probability statements, does the assumption appear extreme. The first view, looking at how much larger losses might be than predicted by normality, is preferable. We see in later chapters that when we apply alternative mathematical assumptions to analyzing tail events, the probabilities start to become more reasonable. Our intuition seems to work reasonably well when applied to loss levels ("once-per-year losses are 1.5 times worse than implied by normality") but less well when applied to probabilities and the normality assumption ("once-per-year events are probability 0.003 percent instead of 0.39 percent").

This may, in fact, be another example of Gigerenzer's (2002) point about the importance of heuristics and how we pose probability problems. It may be that our intuition is well adapted to thinking about the levels of losses. Anyone with experience in the financial markets learns, sometimes the hard way, that gains and losses do in fact have fat tails. It may be that we can work intuitively with extreme events when expressed in levels of losses but that working with probabilities requires more effort. This is not to write off the value of formal probabilistic analysis. The formal analysis lays the foundations that provide justification for the simple and intuitive approximations. The point is that simple approximations are valuable and have their own place.

This rule of thumb is simple and robust—something that is easily understood, easily communicated, and easily used. It is not perfect and we cannot say with precision how reliable it is, but as a simple approximation it serves its purpose—making sure we recognize that losses can occur, and that they are often larger than simple theory would lead us to think.

The simplicity of this rule is itself a huge advantage. Primary attention remains focused on measuring the portfolio volatility, or the behavior during standard trading conditions. Doing so is often a difficult task in itself. Collecting portfolio positions, making reasonable judgments about volatility of individual positions, understanding the interaction of various positions and how these affect the overall portfolio volatility—all of these can be extremely difficult tasks. Given the paucity of observations in the tails, a simple rule of thumb, such as "four-standard-deviation events happen once per year," is a useful supplement to more sophisticated, but more complex, approaches.

This simple rule of thumb corresponds to what is often done in practice, which is to estimate the volatility of the P&L distribution and then assume that the VaR is larger by a fixed factor. The factor is often determined by assuming that the P&L distribution is normal (giving a factor of 2.7 for a once-per-year event), but here the factor is assumed to be larger by an ad hoc amount (4.0 instead of 2.7). Conceptually, the approach is to split the problem into two parts—first estimating the scale of the distribution (generally by the standard deviation or volatility) and subsequently focusing on the tail behavior. This strategy can be very fruitful because the scale of the distribution and the tail behavior can often be analyzed separately.2

5.4 CALCULATING VOLATILITY AND VaR

So far we have been talking about volatility, VaR, and the P&L distribution as if we already knew the distribution, as if somebody gave it to us. This is clearly not the case. We have to estimate them, and that is never easy.

I leave a detailed discussion of the topic to Chapters 8 and 9. It is important, however, to know some of the terms. There are three widely used methods for estimating volatility and VaR: parametric (also called linear, delta normal, or variance-covariance), historic simulation, and Monte Carlo. There are important differences between them and we discuss some of the pros and cons in Chapter 8 and run through an example in Chapter 9. For the moment, the similarities are more important.

Whatever approach we take, the P&L distribution is the fundamental entity. We may talk about volatility and VaR but these simply summarize the distribution itself. As always, it is good to recognize the modesty of our tools. We would like to know what the P&L distribution will be tomorrow, but that is a vain hope; we can only estimate what it was in the past and then assume or hope that it will be similar in the future. Nonetheless, understanding how the portfolio would have behaved is extremely informative and the first step toward understanding how it might behave going forward.

2 Take the simple example of owning $1 million versus $100 million of a U.S. Treasury bond. The scale of the distribution will be very different, but the shape of the distribution will not change. The tail behavior—for example, the ratio of the VaR to the volatility—will be the same because it is determined by the market risk factor (say, the yield) and the shape of the distribution, not the size of the holding.

The goal of any approach for estimating the P&L distribution is to estimate how the current portfolio would have behaved under a variety of conditions, the conditions invariably based on history. In other words, we need to estimate the P&L of the current portfolio under a variety of market conditions.

It is generally useful to think of the portfolio P&L as resulting from two components:

1. External market risk factors
2. Positions—that is, the firm's holdings and the security characteristics that determine the sensitivity to risk factors

The distribution of risk factors and the firm's exposure to those risk factors are combined to obtain the distribution of P&L.3

Rather than undertake a detailed discussion of different methodologies—that is for later chapters—we will go through an example of calculating approximate volatility. We basically covered this in Chapter 1. Now, however, we recognize that risk measurement is focused on estimating the P&L distribution. So why do we estimate just the volatility?

If we are willing to assume the P&L distribution is normal, and such an assumption is not perfect but usually a very good starting point, then we need to calculate only the volatility. Remember that a normal distribution is completely described by the volatility (and the mean, but that will often be close enough to zero that we can ignore it). So when we know the volatility, we know the whole distribution.4
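To make this concrete, here is a small sketch (not from the book) showing that under a zero-mean normal assumption every VaR level is just a fixed multiple of the volatility:

    from scipy.stats import norm

    sigma = 130_800.0                 # daily P&L volatility, dollars
    for p in (0.95, 0.99):
        z = norm.ppf(p)               # 1.645 for 95%, 2.326 for 99%
        print(f"{p:.0%} VaR = {z:.2f} x sigma = ${z * sigma:,.0f}")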

We start with the $20 million holding of the U.S. Treasury bond we have been looking at. The bond volatility will be the result of combining the exposure of the bond to market risk factors with the volatility of those market risk factors. For the bond, a simple but useful exposure measure is the DV01—the bond sensitivity to a one basis point (bp) move in yields (see Coleman 2011b). The DV01 of the 10-year bond is roughly $915 per bp for each $1 million notional—the bond value will fall by roughly $915 when yields go up by one basis point (say, from 2.53 percent to 2.54 percent). For $20 million notional, the DV01 is roughly $18,300. For small yield changes, price changes are roughly proportional to yield changes:

ΔP ≈ −DV01 × ΔY

and so the volatility of prices is approximately the DV01 times the volatility of yields:

Vol(ΔP) ≈ DV01 × Vol(ΔY)

The volatility of yield changes was roughly 7.15 bp per day as of January 2009 (the date we are using for all the examples here). We could estimate this by looking at history for 30 or 100 or 500 days using data from Bloomberg or Yahoo finance (http://finance.yahoo.com/) or the Federal Reserve (www.federalreserve.gov/releases/h15/update/)—always remembering to look at daily changes in yields. Simply looking at history would not give a precise estimate but it would give us a rough idea, and a rough idea is what we care about here. In any case, using the 7.15 bp per day, the volatility of the bond is roughly:

Bond Volatility ≈ $18,300 × 7.15 ≈ $130,800
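The same calculation as a few lines of Python, a sketch using the rough figures quoted in the text:

    dv01_per_million = 915.0      # dollars per bp, per $1M notional
    notional_millions = 20.0      # $20 million holding
    yield_vol_bp = 7.15           # daily yield volatility, bp

    dv01 = dv01_per_million * notional_millions   # ~$18,300 per bp
    bond_vol = dv01 * yield_vol_bp                # Vol(dP) ~ DV01 x Vol(dY)
    print(round(bond_vol))                        # ~130,800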

This provides valuable information, as we outlined earlier. With this, we can say that such a position would probably make or lose more than $130,800 every third day. Is this a lot? It depends. For an individual investor with total wealth of $500,000, it would be, representing 26 percent of the wealth. For a portfolio with $500 million of assets under management, it would be small, representing only 0.03 percent of the portfolio.

3 As Jorion (2007, 247) nicely expresses it: "The potential for losses results from exposures to the risk factors, as well as the distribution of these risk factors."
4 The approach we are using here—assuming the P&L distribution is normal and estimating the volatility—is the parametric approach that we discuss in more detail in Chapters 8 and 9.

This kind of rough volatility estimate becomes really useful when we want to compare this bond against some other security, say an equity futures position such as the CAC equity index futures.

Consider adding a €7 million long futures position in the CAC 40 index to the $20 million bond position (when the $/€ exchange rate was 1.30, so that €7 million corresponded to $9.10 million). These are the positions considered in Chapter 1. They are, however, very different positions: the first is a straightforward purchase of a simple bond denominated in U.S. dollars, and the second is a derivatives position in a euro-denominated equity index with no up-front investment. One is a traditional security, the other a derivative security. One is fixed income, the other equity. Which is riskier?

How can we compare and aggregate the risk of these two quite disparate trades? We cannot look at the nominal amount invested because the bond is a $20 million investment and the futures position involves roughly zero investment. They are in different currencies and they are different asset classes. Furthermore, any trader or manager with extensive experience in one would not be likely to have deep familiarity with the other, so relying just on experience and common sense will likely not work.

Both trades, however, have one common denominator: the P&L. Money is money, profits are profits, and losses are directly comparable between the two. (One must, of course, remember to express both in the same currency, either dollars or euros.) We have calculated a rough estimate of the bond volatility and we can do the same for the CAC index futures. This is even easier than for the bond. We simply estimate the volatility of the CAC index (again, we can go to Bloomberg or Yahoo finance for history). The annualized volatility of percent changes as of January 2009 was roughly 40 percent, which translates into a daily volatility of percent changes of 2.536 percent. In other words, a holding of $9.10 million would have a volatility of 2.536 percent, or roughly $230,800.

To emphasize that it is the P&L distributions that really matter, Figure 5.6 shows the P&L distributions for these two trades, assuming the P&L is distributed normally with daily volatilities of $130,800 and $230,800. The P&L is not exactly normal but for a rough comparison of standard trading conditions, this assumption is perfectly good—it gets us most of the way to the truth.

The distribution for the U.S. bond in Panel A is narrower (less dispersed) than for the CAC index in Panel B. The daily volatility for the bond is $130,800, and for the CAC index futures, it is $230,800. This figure provides an easy and direct comparison between the two. Panel C shows the two distributions overlaid, and we can say that the CAC futures position is riskier because the distribution is more dispersed (and both are centered around zero).

Multiple Assets

We usually need to go beyond comparing one asset against another. When we combine assets into a portfolio, we need to ask, "What is the overall portfolio volatility?" Combining distributions and volatilities across assets can get complicated and usually needs data, computers, and programming. We talk about this in detail in later chapters. The important point here is that there is a method to combine the distributions in a reasonable way, and that we can use the combined P&L distribution to build the same kind of intuition that we did earlier for the bond alone.

In our example with the bond and the CAC futures, the combined volatility is roughly $291,000. Is this large? Is it small? We have to ask, "Are we comfortable with P&L worse than −$291,000 or better than +$291,000 every third day?" Or (using Litterman's rule of thumb that we should expect to see 4σ days roughly once per year) "Are we comfortable with losses of $1.16 million once per year?" What matters here is that we are going to treat the overall portfolio P&L distribution the same as we did for the bond alone.
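The combination can be sketched with the usual two-asset variance formula. The bond/CAC correlation is not quoted in the text; the 0.24 used here is an assumption, backed out so that the result matches the roughly $291,000 figure:

    import math

    vol_bond, vol_cac, rho = 130_800.0, 230_800.0, 0.24
    port_vol = math.sqrt(vol_bond**2 + vol_cac**2
                         + 2 * rho * vol_bond * vol_cac)
    print(round(port_vol))        # ~291,300
    print(round(4 * port_vol))    # once-per-year 4-sigma loss, ~1.17 million
                                  # (the text's $1.16M uses the rounded $291,000)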

[Figure 5.6: P&L Distribution for Bond and Equity Futures Compared. Panel A: P&L distribution for the bond (mean 0, standard deviation $130,800). Panel B: P&L distribution for the equity futures (mean 0, standard deviation $230,800). Panel C: the bond and equity futures distributions overlaid, losses to the left, profits to the right. Reproduced from Figure 5.7 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.]


5.5 SUMMARY FOR VOLATILITY AND VaR

Volatility and VaR both measure spread or dispersion of the P&L distribution. It is the P&L distribution that matters—how likely we are to see losses versus gains. We use the volatility and VaR to summarize the spread of the distribution and to build intuition for risk, but it is actually the distribution, such as displayed in Figure 5.2, that matters.

For risk, the most important characteristic of the distribution is the spread or dispersion. Volatility and VaR are simply two different numbers that summarize the dispersion. (For a normal distribution they are interchangeable; otherwise they show us slightly different views.) There is nothing magic about volatility and VaR. When you truly understand what they mean, you see that they are incredibly simple.

The importance of the numbers is what we do with them—"It's not the figures themselves, it's what you do with them that matters." How do we use them to inform our decisions? How do we use them to get a gut feeling for the risk?

Table 5.1 provides a cheat sheet for volatility and VaR. What is really important is how we use volatility and VaR. We need to understand what they are trying to tell us.

TABLE 5.1 Some Suggestions for Using Volatility and VaR

Volatility, σ (better or worse: roughly 30%). There is roughly a 30% chance of profit better than +σ or loss worse than −σ, so we should expect bigger P&L roughly one day out of three. So if volatility is $100,000, the P&L should be worse than −$100,000 or better than +$100,000 roughly one day out of three.

Volatility, σ (worse: roughly 15%). There is roughly a 15% chance of loss worse than −σ, so we should expect losses worse than this roughly one day out of seven.

5%/95% VaR (worse: 5%). There is roughly a 5% chance of loss worse than this, so we should expect worse losses roughly one day out of 20. But be a little cautious—tail events are hard to measure.

1%/99% VaR (worse: 1%). There is roughly a 1% chance of loss worse than this, so we should expect worse losses roughly one day out of 100. But be quite cautious—tail events are hard to measure. In fact, the smaller the probability, the more cautious you want to be; the less you want to believe the number as anything but a rough guide.

Extreme events (4σ occurs once per year). Suggested by Litterman, this rule of thumb assumes that four-standard-deviation events happen approximately once per year.

Time scaling (√d). To go from one-day volatility (or VaR) to d-day volatility, multiply by √d. So to go from one-day to 10-day, multiply by √10 ≈ 3.16.

Volatility comparison. Compare with other assets, assets that we have experience with. It is often useful to measure volatility as a percentage of one's portfolio—so for a bond with σ = $130,800 and a $20 million investment, this is roughly 0.654% per day. Scale to annual: 0.654 × √255 ≈ 0.654 × 15.97 ≈ 10.4%. Compare this with 20% to 25% volatility for the S&P index—bonds are less volatile.

5.6 PORTFOLIO TOOLS

Think of volatility and VaR as the standard quantitative risk measurement tools. They help us understand the size of the risk. But they don't tell us where the risk comes from or how we can alter it. They are flat in the sense of telling us the size but nothing about the composition. We need tools for understanding the composition of the risk—where it comes from and how we can change it.

There are two basic problems in understanding risk:

1. Risks combine in a nonlinear, often nonintuitive manner. Risks don't just add. Sometimes two risks add, sometimes they subtract. It's not obvious, without some work, how different risks will add or subtract.

2. For more than two risks, it becomes impossible to understand without quantitative tools to aid our understanding. A large portfolio can be so complex that we need simple, straightforward tools to help untangle the risk and point us in the direction of how to manage that risk. We need tools for drilling down to uncover the sources of risk.

Litterman (1996) expresses this well:

Volatility and VaR characterize, in slightly different ways, the degree of dispersion in the distribution of gains and losses, and therefore are useful for monitoring risk. They do not, however, provide much guidance for risk management. To manage risk, you have to understand what the sources of risk are in the portfolio and what trades will provide effective ways to reduce risk. Thus, risk management requires additional analysis—in particular, a decomposition of risk, an ability to find potential hedges, and an ability to find simple representations for complex positions. (p. 59)

There are three main tools we discuss here that are useful for understanding the sources of risk:

1. Contribution to risk
2. Best hedges
3. Replicating portfolios

Many of these ideas are based on Robert Litterman's Hot Spots and Hedges (Litterman 1996), some of which also appeared in Risk magazine in March 1997 and May 1997. The idea of contribution to risk was developed independently by Litterman and M. B. Garman (Risk magazine 1996).

These tools can give a view into a portfolio. They are anything but perfect, but remember that the manager who understands today the broad contours of the portfolio's risk is better off than the manager who understands next year the exact details (after the business has blown up). A simple approach can provide powerful insights where it is applicable, and many, even most, portfolios are locally linear and amenable to these techniques. Again, Litterman (1996, 53) summarizes the situation well:

Many risk managers today seem to forget that the key benefit of a simple approach, such as the linear approximation implicit in traditional portfolio analysis, is the powerful insight it can provide in contexts where it is valid.

With very few exceptions, portfolios will have locally linear exposures about which the application of portfolio risk analysis tools can provide useful information.

Marginal Contribution

Risk is not additive. Volatility and VaR for different assets sometimes add, sometimes subtract. Because they are not additive, understanding where the risk comes from can be difficult. We would like to decompose or break down the overall volatility into contributions resulting from different risk factors or assets or subportfolios. (I focus on volatility, but pretty much everything applies equally to VaR.) We would like to be able to say something like: "The overall portfolio volatility is $291,300. Thirty percent comes from the bond, seventy percent from the equity futures."

We cannot use the individual asset volatilities because they simply do not add. In our example of the bond and the futures, the bond volatility is $130,800 and the equity futures is $230,800. They add to $361,600, not the $291,300 portfolio volatility. The problems mount for complex portfolios.

It turns out, however, that the overall volatility does decompose into additive components, but these components are not the asset volatilities; they are instead something we will call the marginal contribution or contribution to volatility (or VaR or risk). We go over the formulae in some detail in later chapters but here we only care that there is some way to consistently split the overall volatility into additive components resulting from individual assets or risk factors.

We have to be a little careful, however. Just because we have a formula to split the volatility into additive components does not mean those components actually tell us anything useful. For example, we could use a rule that arbitrarily splits the overall volatility equally among all assets or risk factors: With 10 risk factors we assign one-tenth of the volatility to each risk factor. This is a simple rule, but also useless.

The beauty of the marginal contribution is that it decomposes the portfolio volatility into meaningful components. It tells us something really useful: how small changes in individual risk factors or assets contribute to the change in the overall portfolio volatility. (Thus the term marginal, denoting small changes or changes at the margin.) We can say, "The overall portfolio volatility is $291,300. When all positions change by 1 percent, the overall volatility also changes by 1 percent or $2,913. Thirty percent, or $836, comes from the bond; 70 percent, or $2,077, from the equity futures." This is an incredibly useful view into the overall risk. The marginal contribution breaks the volatility down into components, components that are additive and tell us how the overall volatility responds to small percentage changes in positions.
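A sketch of this decomposition for the two-asset example, in covariance-matrix form and using the same assumed 0.24 correlation as in the earlier sketch (the book gives the formulas in later chapters):

    import numpy as np

    vols = np.array([130_800.0, 230_800.0])   # bond, CAC futures
    rho = 0.24
    corr = np.array([[1.0, rho],
                     [rho, 1.0]])
    cov = corr * np.outer(vols, vols)          # dollar covariance matrix

    port_vol = np.sqrt(cov.sum())              # ~291,300
    mc_level = cov.sum(axis=1) / port_vol      # additive; sums to port_vol
    mc_prop = mc_level / port_vol              # ~28.7% bond, ~71.3% CAC
    print(port_vol, mc_level, mc_prop)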

The marginal contribution is particularly useful for large and complex portfolios. Such portfolios are particularly hard to understand, and they tend to change incrementally, with relatively small changes in individual positions.

We can examine the marginal contribution to volatility using our example of the U.S. Treasury bond and the CAC equity futures. Table 5.2 shows the individual asset volatilities and the marginal contributions to volatility. The total volatility decomposes into roughly 30 percent due to the bond and 70 percent due to the equity futures. This is a really useful decomposition because it tells us that even though the notional amount of the bond is much larger, the CAC futures contributes most of the risk, and changes in the CAC futures position would have much more impact on the portfolio risk than changes in the bond position.

For this simple portfolio, the individual position volatilities might tell us much the same—the CAC is more important—but that would not work for large complex portfolios. In such a case, the marginal contribution comes into its own.

TABLE 5.2 Volatility for Simple Portfolio with Contribution to Risk

                          Position      Marginal Contribution
Item                      Volatility    Proportional    Level
+$20M UST 10-year bond    $130,800       28.70%         $83,600
+€7M CAC futures           230,800       71.30%         207,700
Portfolio                  291,300      100.00%         291,300

Proportional contribution is [v_i²σ_i² + ρv_iσ_iv_jσ_j]/σ_p²; level contribution is [v_i²σ_i² + ρv_iσ_iv_jσ_j]/σ_p.

Notes: This shows the position volatility (stand-alone volatility), the portfolio volatility, and the contribution to volatility (marginal contribution) for a portfolio consisting of a $20 million 10-year U.S. Treasury bond and €7 million CAC index equity futures. Based on Table 5.4 from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

All-or-Nothing Contribution

The marginal contribution tells us just that—what is the contribution to the portfolio volatility for a small change, at the margin, to a position. There are times, however, when we want to ask a different question—what is the contribution due to the whole position? How would the volatility change if we completely removed a particular position? Conceptually, this is very simple, because it just means recalculating the portfolio volatility with one particular position removed. (There are more efficient ways to calculate it, which we discuss in later chapters.)

This is the all-or-nothing contribution—how much the volatility changes when the position is set to zero. In my career, I have generally found this less useful than other measures. The marginal contribution is for small changes only but is additive, so it provides a useful decomposition of the volatility. For large changes in position, I find that the best hedge position and best replicating portfolio, discussed next, provide more useful information. Having said that, the point of these tools is to build intuition and understanding around the portfolio risk, and different people will have different tastes and find different measures more or less useful.
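A sketch of the brute-force all-or-nothing calculation for the two-asset example, again with the assumed 0.24 correlation:

    import numpy as np

    vols = np.array([130_800.0, 230_800.0])     # bond, CAC futures
    rho = 0.24
    corr = np.array([[1.0, rho], [rho, 1.0]])

    def port_vol(v):
        return float(np.sqrt(v @ corr @ v))

    full = port_vol(vols)
    for i, name in enumerate(["bond", "CAC futures"]):
        v = vols.copy()
        v[i] = 0.0                              # remove the position entirely
        print(name, round(full - port_vol(v)))  # ~60,500 and ~160,500
                                                # (Table 5.4: 60,490 and 160,600)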

Terminology

Before we leave the topic of marginal contribution and all-or-nothing contribution, we have to address an annoying problem: inconsistent terminology. This may seem trivial, but it actually serves as a serious impediment to better understanding and use of these tools, particularly marginal contribution. Some authors use the term marginal contribution as I do here, some use a different term, and still others, rather oddly, use marginal to refer to the all-or-nothing contribution. When a risk professional talks about marginal contribution, one needs to be careful about what he means. The lack of consistent terminology can lead to potential confusion and misunderstanding.

Table 5.3 provides a guide to the various terms used by different writers. Particularly confusing is that the RiskMetrics Group uses the word marginal for the all-or-nothing measure (even though the word marginal is commonly used to denote small changes at the margin and not large, finite changes) and uses the word incremental for the infinitesimal measure (arguably also at odds with common usage of the word incremental). Most of the literature uses the reverse terminology. Nor, unfortunately, are texts always clear in their explanation of the formulas or concepts.

TABLE 5.3 Terms for Contribution to Risk

Source                             Infinitesimal                     All or Nothing
This book                          Marginal contribution or          All-or-nothing
                                   contribution to risk              contribution to risk
Litterman (1996)                   Contribution to risk
Crouhy, Galai, and Mark (2001)     Delta VaR                         Incremental VaR
Marrison (2002)                    VaR contribution
Mina and Xiao/RiskMetrics (2001)   Incremental VaR                   Marginal VaR
Jorion (2007)                      Marginal VaR and component VaR    Incremental VaR

Reproduced from Exhibit 5.2 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


Best Hedge Positions and Replicating Portfolios

The final tools for understanding portfolio risk are the related ideas of a best hedge and replicating portfolio. To start, consider a single asset currently in the portfolio. We can ask, "Using this one asset alone, what is the position that would best hedge the portfolio or optimally replicate the existing portfolio?" The best hedge for asset A is the position in A that, when combined with the existing portfolio, reduces the portfolio volatility as much as possible. The mirror of that position provides an optimal replicating portfolio.

This is much easier to see with an example. Continuing with the simple portfolio of a $20 million Treasury bond and €7 million CAC equity futures, Table 5.4 continues from Table 5.2. For the CAC futures, the "Best Hedge Position" is short €0.95 million. This says that if we want to choose an amount of CAC futures that provides the best hedge for the whole rest of the portfolio (in this case the "rest of the portfolio" is only the bond, but in general it would be a whole set of positions) we would need to be short €0.95 million.

TABLE 5.4 Volatility for Simple Portfolio with All-or-Nothing Contribution and Replicating Positions

Position                Volatility      All-or-Nothing   Best Hedge   Replicating   Volatility at   % Volatility
                        (stand-alone)   Contribution     Position     Position      Best Hedge      Reduction
$20.0M 10-yr UST           130,800         60,490          −8.47        28.5          224,100         23.1
€7.0M CAC Equity Index     230,800        160,600          −0.95         7.95         126,900         56.4
Portfolio Volatility       291,300

Notes: This shows the position volatility (stand-alone volatility), the all-or-nothing contribution to portfolio volatility, and the replicating positions for a portfolio consisting of a $20 million 10-year U.S. Treasury bond and €7 million CAC index equity futures.
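A sketch of the single-asset best hedge calculation behind the CAC row of Table 5.4. The per-unit volatility and the 0.24 correlation are the same assumed inputs used in the earlier sketches:

    import math

    vol_bond = 130_800.0                   # $20M bond, dollar volatility
    vol_cac_per_m = 230_800.0 / 7.0        # CAC dollar vol per EUR 1M
    rho = 0.24

    # Minimum-variance (best hedge) net position in the CAC futures
    best_net = -rho * vol_bond / vol_cac_per_m
    print(best_net)                        # ~ -0.95 EUR million (net)
    print(best_net - 7.0)                  # ~ -7.95 gross trade to put on

    # Portfolio volatility at the best hedge position
    v = best_net * vol_cac_per_m
    hedged = math.sqrt(vol_bond**2 + v**2 + 2 * rho * vol_bond * v)
    print(round(hedged))                   # ~127,000; Table 5.4 shows 126,900,
                                           # a roughly 56% reduction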

The terminology can be a little confusing. We have to be clear whether our best hedge position is net (netting off with the position in the original portfolio) or gross (new position that must be added to the existing portfolio). In Table 5.4, the €0.95 million is net—the absolute position that is the best hedge. In many ways, however, the gross position of €7.95 million—the new position that would have to be added—is more useful. Short €7.95 million is the new hedge we would have to put on to hedge the existing portfolio (the original €7 million would then net out to leave the figure shown in the table). The mirror, long €7.95 million, would be the best replicating portfolio or the best representation of the full portfolio using only the CAC futures.

When we calculate the best hedge we would also like to know how good that hedge is—how much it would change the portfolio volatility. Table 5.4 shows the volatility at the best hedge position. For the CAC, this shows what the portfolio volatility would be if we added a short €7.95 million CAC position to the existing portfolio. We can then calculate how much the volatility is reduced, shown in Table 5.4 as a percentage.

We can calculate a "Best Hedge Position" for any single instrument in our portfolio. In Table 5.4, we have done that for both the bond and the CAC futures. The best hedge positions are valuable because they help us understand how the portfolio behaves, by comparing the portfolio with simple (single-asset) portfolios.

We can also go one step further, and ask, "What is the top best hedge?" Slightly clumsy wording, but the goal is to see what single position among all possible choices would be the best portfolio hedge. For our simple example, we have only two possibilities, but in general we would have more. Table 5.4 shows that the top best hedge position would be the CAC futures.

We can use this information and build intuition in two ways. First, we can say that short €7.95 million CAC futures would provide the best hedge to the existing portfolio. This could help in an emergency situation, if we needed to quickly reduce the risk but could only transact in a limited universe of liquid securities.5

Second, we can say the existing portfolio behaves most like long €7.95 million of CAC futures. This gives us a simple way to describe how the portfolio behaves. This helps build our intuition about the portfolio by summarizing it in a simple way. The idea can also be extended to multiple assets, for which the idea of a replicating portfolio can become even more useful.

Before turning from the topic of best hedges, we need to ask whether the best hedge positions tell us anything different from the marginal contribution. In the simple portfolio shown in Table 5.4, the best hedge does not really tell us anything more than the marginal contribution—the CAC futures has the highest marginal contribution and is also the top hedge.

For large, complex portfolios, however, marginal contribution and best hedges will tell us different information. To see how this can happen and what marginal contribution versus best hedges tells us, say that we added to our portfolio $40 million of five-year U.S. Treasury bonds. The CAC futures would still provide the biggest contribution to volatility, roughly double either of the other bonds. Tables 5.5 and 5.6 show the marginal contribution.

5 We would, however, have to be very careful in using just a single asset to hedge the whole portfolio. A single asset will not usually provide a good hedge and we need to look carefully at how much the single hedge might reduce the volatility. We also have to be cognizant that the risk reduction potential during extreme circumstances could be different from during normal times. We would have to use the numbers in Table 5.4 with caution.

TABLE 5.5 Volatility for More Complex Portfolio with Contribution to Risk

Position                 Position Volatility   MC Proportional   MC Levels
$40.0M 5-yr UST              131,100              0.273           105,100
$20.0M 10-yr UST             130,800              0.267           102,800
€7.0M CAC Equity Index       230,800              0.461           177,800
Portfolio Volatility         385,700              1.000           385,700

Notes: This shows the position volatility (stand-alone volatility), the portfolio volatility, and the contribution to volatility (marginal contribution) for a portfolio consisting of $40 million 5-year U.S. Treasury bonds, a $20 million 10-year U.S. Treasury bond, and €7 million CAC index equity futures.

TABLE 5.6 Volatility for More Complex Portfolio with All-or-Nothing Contribution and Replicating Positions

Position                Volatility      All-or-Nothing   Best Hedge   Replicating   Volatility at   % Volatility
                        (stand-alone)   Contribution     Position     Position      Best Hedge      Reduction
$40.0M 5-yr UST            131,100         94,430         −54.4        94.4          230,400         40.3
$20.0M 10-yr UST           130,800         91,520         −26.4        46.4          238,300         38.2
€7.0M CAC Equity Index     230,800        130,900          −2.01        9.01         246,100         36.2
Portfolio Volatility       385,700

Notes: This shows the position volatility (stand-alone volatility), the all-or-nothing contribution to portfolio volatility, and the replicating positions for a portfolio consisting of $40 million 5-year U.S. Treasury bonds, a $20 million 10-year U.S. Treasury bond, and €7 million CAC index equity futures.

The marginal contribution for the CAC futures is so large because the CAC futures are more volatile than either the 10-year or the 5-year bonds. For either the 10-year or the 5-year bonds, a small change in position (either one on its own) contributes less than a small change in the CAC futures. But the CAC futures are no longer the top hedge—the top hedge is now one of the two bonds (see Table 5.6). This also makes sense. The two bonds usually move together so that, in a vague sense, they behave as the same security. The 10-year bond will act as a very good hedge to the 5-year bond and vice versa. Since this portfolio is more heavily weighted toward bonds than the portfolio in Table 5.4, the portfolio behaves more like a bond portfolio. The best replicating portfolio will be a bond rather than the equity, and, in fact, the 5-year is the best (with the 10-year almost as good).

Replicating Portfolio for Multiple Assets

We can easily extend the idea of a replicating portfolio to multiple assets. We might ask, "Using only five assets, what five assets would best replicate the existing portfolio?" The mirror of this replicating portfolio will, of course, be a hedging portfolio, one that if executed would best reduce the volatility.

The replicating portfolio can be a valuable tool because it gives a simple summary of the portfolio using a small number of assets. When the replicating or hedging portfolio gives a large reduction in volatility, then we know that the replicating portfolio gives a simple and useful summary of the portfolio. Such a summary can serve both to help managers understand how the portfolio behaves and to communicate the makeup of the portfolio to outside constituencies without disclosing details of the underlying portfolio.
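One standard way to construct such a replicating portfolio (a sketch, not necessarily the book's exact algorithm) is to choose weights that minimize the variance of the difference between the portfolio and the replicating basket; the least-squares solution is w = Σ_aa⁻¹ Σ_ap, where Σ_aa is the covariance of the candidate assets and Σ_ap their covariance with the portfolio:

    import numpy as np

    def replicating_weights(cov_assets, cov_assets_portfolio):
        # Minimum tracking-variance weights on the candidate assets
        return np.linalg.solve(cov_assets, cov_assets_portfolio)

    # Demo with the chapter's two-asset example, replicating with the
    # CAC alone (0.24 correlation assumed, as in the earlier sketches)
    vol_bond, vol_cac_per_m, rho = 130_800.0, 230_800.0 / 7.0, 0.24
    cov_aa = np.array([[vol_cac_per_m**2]])
    cov_ap = np.array([rho * vol_cac_per_m * vol_bond
                       + vol_cac_per_m * 230_800.0])
    print(replicating_weights(cov_aa, cov_ap))   # ~7.95 EUR million,
                                                 # matching Table 5.4

Using one asset, this reproduces the single-asset replicating position; with several candidate assets it gives the multi-asset replicating portfolio described above.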

Our simple two-asset portfolio is so simple that we cannot really give an example of a multiple-asset replicating portfolio. The idea comes into its own for large and complex portfolios. We return in Chapter 10 to the idea of replicating portfolios and examine a more complicated portfolio.

5.7 CONCLUSION

This chapter has aimed to explain some of the basic tools used in quantitative risk measurement—volatility and VaR to measure the size of risk, and marginal contribution and best hedges to understand the composition of risk. These tools are important but they are not the only tools, and I have only outlined the intuition and have not laid out the technical foundations. The focus has been on how to use and think about these tools, with little or no attention directed toward how to estimate or calculate them.

Later chapters cover the details, the formulae, and the calculations necessary to produce the numbers. Chapter 8 concentrates on the formulae and technicalities behind volatility and VaR, while Chapter 9 applies these to a simple portfolio to make the concepts and calculations concrete. Chapter 10 turns to the portfolio tools of marginal contribution, best hedges, and so on.


CHAPTER 6
Uses and Limitations of Quantitative Techniques

We have now finished our examination of Risk Management: how we should think about risk, the proper role of management within risk management, and some of the intuition behind the numbers. In the following chapters we turn to Quantitative Risk Measurement; we tackle the mathematics and the technical details. What is the definition of volatility and VaR, what is the contribution to volatility, how do we handle fat tails? We need to get these details right. That is not to say we always need the perfect answer—an approximation that tells us 90 percent of the answer today is better than the perfect answer that arrives too late. But we do need to be careful, understanding the technical details well enough to separate wheat from chaff and apply reasoned judgment and common sense to the technical details.

I hope the following chapters are valuable to a wide range of readers. They are, of course, primarily aimed at quantitative users whose job it is to understand and produce the numbers. But I do not want to dissuade less-technical readers from perusing some of the chapters. I have tried to provide the intuition behind the mathematics at the same time as giving the formulae. I have illustrated many of the quantitative tools with examples, particularly in Chapters 9 and 10, and these should be accessible to all readers.

Before turning to the technical chapters, however, it is worthwhile to review some of the limitations of quantitative techniques. Such a review rightly falls in this first section, firmly under risk management rather than risk measurement, because managers need to appreciate not only the power but also the limitations of quantitative techniques. Quantitative techniques work best in the hands of those who understand the techniques but who are also keenly aware of the limits and boundaries of what these techniques can provide. A deep appreciation of the limitations gives the user the confidence to rely on the techniques when appropriate and the good sense to turn elsewhere when necessary. Like most helpful tools, these techniques work well when used properly, and the key is to understand their limitations in order to avoid misusing them. The real risk to an organization is in the unanticipated or unexpected—exactly what quantitative measures capture least well.

6.1 RISK MEASUREMENT LIMITATIONS

Like any set of techniques or tools, risk measurement has definite limitations. This is not a problem; it is just the way the world is. A hammer is a useful tool, but it has limitations. It is good for pounding in a nail but not good for sawing a plank. Appreciating risk measurement limitations helps us understand when and where quantitative techniques are (and are not) useful. Failure to understand the limitations of risk measurement techniques, however, is a problem. Misusing the techniques in the face of limitations leads to mistakes, misunderstandings, and errors.

Models for Measuring Risk Will Not Include All Positions and All Risks

The models used to measure VaR, volatility, or whatever else will never include all positions and all risks. Positions may be missed for a variety of reasons. Perhaps some legacy computer system does not feed the main risk system, or some new system is not yet integrated. A new product may not yet be modeled, or someone may simply neglect to book a trade in a timely manner. A good and robust risk system will have processes and procedures for checking that all positions are captured and reporting those that are not. Nonetheless, there is always some possibility that positions are missed.

Likewise, the risk of positions that are included may not be properly represented. A complex derivative security may not be modeled correctly. Some product may have an unexpected sensitivity that is not captured by the risk system.

Missing positions and missing risks mean that the risk measures reported will not perfectly represent the actual risk. In reality, nobody should be surprised that a reported risk number is not absolutely perfect. It is an estimate, and like any estimate, it will be subject to errors—one possible error being that the positions or risks do not perfectly model the real world. A risk system should be viewed as a tool for summarizing and aggregating a large amount of information in a concise manner. It will not be perfect, and users should recognize that in using the results.


Risk Measures Such as VaR and Volatility Are Backward Looking

Quantitative techniques can tell us things about how positions and a portfolio would have behaved under past conditions—conditions that are ultimately derived from past experience. This is not a criticism, and contrary to what some commentators say, it is not a weakness of risk measurement techniques. It is simply the way the world is: We can seek to understand the past, but we cannot know the future. Understanding the past is terribly important because understanding current exposures and how they would have behaved in the past is the first step toward managing the future. As George Santayana said, "Those who cannot remember the past are condemned to repeat it."

The mistake here would be to think that these backward-looking tools measure the future. A manager needs to use judgment to interpret backward-looking information and incorporate it into the current decisions that will, together with randomness and luck, produce the future. Recognizing the backward-looking nature of the tools reminds us of the limitations and argues for care in using tools such as VaR and volatility.

VaR Does Not Measure the Worst Case

Statistical measures such as volatility, VaR, expected shortfall, and others provide summary information about the dispersion of the P&L distribution and will never tell us the worst case. VaR is often talked about and thought about as a statistically worst-case loss, but that is a misleading way to think. Whatever VaR level we choose, we can always do worse, and in fact, we are guaranteed to do worse at some point. Expected shortfall is useful relative to VaR exactly because it incorporates information on the losses worse than the VaR level, but expected shortfall does not change the fact that it is simply a summary statistic providing information about the distribution rather than about individual events that have not happened yet.

Litterman's (1996, footnote 1) recommendation for how to think of VaR is good: "Think of [VaR] not as a worst case, but rather as a regularly occurring event with which we should be comfortable" (p. 74). Thinking of VaR as a worst case is both intellectually lazy and dangerous. It is intellectually lazy because a worst case relieves one of the responsibility of thinking of the consequences and responses to yet worse outcomes. It is dangerous because it is certain that results will, at some point, be worse.

VaR, volatility, and other risk measures should be viewed as a set of measuring tools that tell us about the likely level of losses (the "regularly occurring event with which we should be comfortable"). When viewed this way, they push us toward thinking about what to do when something worse occurs, how much worse things could actually get and why, and how to react when things do get worse. Not only do they push us toward thinking about those possibilities, but they also provide quantitative information on how bad "worse" might be.

Quantitative Techniques Are Complex and Require Expertise and Experience to Use Properly

On the one hand, quantitative techniques used in modern risk measurement are indeed complex. On the other hand, risk management experts, like other experts, seem to make everything complicated. A balance needs to be struck. General managers and board members have a responsibility to understand the complex businesses they oversee. The financial business overall, not just risk measurement, is complex and is becoming more complex all the time. Managers at financial firms should take their responsibilities seriously and learn enough about the business, including risk measurement, that they can effectively use the available tools. In this day and age, lack of technical expertise cannot be an excuse for failing to use or understand risk measurement information.

Risk managers, however, have the corresponding responsibility to explain their techniques and results to nonexperts in a simple, concise, transparent manner. Most of the ideas behind risk measurement are simple, even if the details necessary to get the results are complex. Simple ideas, clear presentation, and concise description must be the goals for anyone engaged in measuring risk.

Quantitative Risk Measures Do Not Properly Represent Extreme Events

Quantitative risk measures do not catch extreme events. Experience does not. Imagination can try, but even that fails. Extreme events are extreme and hard to predict, and that is just the way life is. We need to recognize this limitation, but it is hardly a failure of risk techniques. To criticize the field of risk measurement because we cannot represent extreme events very well is just silly, like criticizing the sky because it is blue. Anybody who does not like extreme events should not be in the financial markets. Luck, both good and bad, is part of the world. We can use quantitative tools to try to put some estimates around extreme events, but we have to learn to live with uncertainty, particularly when it comes to extreme events.

Failure to appreciate our limitations, however, is a serious mistake. Overconfidence in numbers and quantitative techniques and in our ability to represent extreme events should be subject to severe criticism because it lulls us into a false sense of security. Understanding the limitations, however, does not mean throwing out the tools that we have at our disposal for estimating extreme events, even if they have limitations.


PART Two

Measuring Risk


CHAPTER 7
Introduction to Quantitative Risk Measurement

The first section of this book focused on how to think about risk and risk management. The second section of this book focuses on how to calculate risk.

The emphasis in Part One was on understanding risk and knowing the tools and what they tell us. We ignored the technical details of volatility and VaR because risk management is more than just quantitative measurement; risk management is managing people, projects, and institutions. In the end, risk management is not the numbers; risk management is what you do with them.

But the numbers are important. In fact, they are critically important. Without numbers, there is little we can do; without measuring risk, we cannot manage risk. Kendall and Stuart are right when they say that it is not the figures themselves but what we do with them that matters. But their maxim applies only after the numbers have been produced. Somehow, someway, we must produce the numbers that summarize, quantify, and measure the risk. It is that task, the task of measuring risk, to which we now turn.

The correct balance between hard numbers and soft management skills is never easy to pull off. The mathematicians among us demand rigor, consistency, and complete (and complex) models that account for every last detail. The managers among us demand simple answers delivered yesterday with no budget for programming, data, or personnel.

Part Two focuses on the mathematical, the quantitative side of the balance. The goal, however, is to present practical solutions backed by solid quantitative techniques. These chapters are meant to take the ideas discussed in Chapter 5 and give the mathematics and theory behind them. These chapters should serve as a reference for the risk professional needing the formulae for, say, the expected shortfall, assuming the P&L is distributed as a mixture of normals.


These chapters should also serve as a guide for the manager who has less technical training but nonetheless needs background on the pros and cons of estimating volatility or VaR parametrically (delta-normal) versus by Monte Carlo. The goal in risk measurement and management is to marry the technical expertise necessary for producing the numbers with the common sense, judgment, and experience required to do something sensible with them.

I have tried to be rigorous, but in the end risk measurement is an applied field and it is more important to get a good answer today than to wait for the perfect answer next year. The objective is to have a good enough view of the theoretically perfect method to understand what shortcuts will work, when, and why. And what shortcuts will not work. Identifying the theoretically perfect solution, however, is important because it provides the goal toward which the organization should work. Building and implementing a risk system is never finished, and we always need to recognize where our systems and procedures can be improved and then work on implementing those improvements.

7.1 PROJECT IMPLEMENTATION

Risk measurement is an applied science and, as such, we need to take the theoretical ideas and actually make them work. Make them work on computer systems, with complex and messy data, used by people with varying degrees of sophistication and knowledge.

Risk projects are as much about boring data and IT infrastructure as about fancy quantitative techniques. In building or implementing a risk management project, roughly 80 percent of the effort and investment is in data and IT infrastructure, and only 20 percent in sophisticated quantitative techniques. I cannot overemphasize the importance of data and the IT infrastructure required to store and manipulate the data. The bottom line is that if you don't know what is in the portfolio, it is hard to do any sophisticated analysis on the portfolio. For market risk, but credit risk in particular, good records of positions and counterparties are critical, and these data must be in a form that can be used.

Data

Data are always a big issue. Obtaining and using data is often more challenging than anticipated. Good quality and timely data, however, form the bedrock of any risk project.

Data can roughly be separated into external and internal data. External data are items such as history on market risk factors or security


characteristics. (Is that bond the trader bought this morning maturing on the 15th or 31st of August?) We need to collect, clean, warehouse, update, and distribute these data.

Internal data are sometimes even harder than external data. One might think that collecting and using internal data would be easier because such data are under the control of the firm. But it never is easier. Positions and security details are squirreled away in obscure legacy systems that cannot talk to each other. New products and securities invariably start out in a spreadsheet. This is reasonable for one, two, or three trades. But if a new business succeeds, it becomes 100, 200, 300 trades, and suddenly things start to creak and break. The risk due to the product is suddenly important, the data are difficult to get and often unreliable, and there is never the budget or staff to build a proper valuation system and data warehouse until something blows up.

IT Systems

All the data need to be cleaned, stored, and manipulated. The ideas presented in this book need to be translated into computer code. Much as we might like, we cannot do all this on an HP 12-C and the back of an envelope. (Having said this, the ability to use those tools is critically important in getting the complicated systems to work properly.)

The cost and effort spent on acquiring and maintaining the IT infrastructure should not be underestimated, but neither should it stand as a significant impediment to implementing a risk project. Building data and IT infrastructure is not rocket science, and IT tools continue to improve. A system that would have taken many man-years to build a few short years ago can now be built in a short period with a small team. Nonetheless the programming and systems development requires good systems skills combined with a firm grasp of mathematics, statistics, and probability.

Daily Production

Numbers need to be produced daily and delivered accurately, reliably, on time, to the right people. The daily production process needs to be managed and implemented appropriately. The skills for doing this are different from the skills required to build the systems. It requires attention to detail but also the patience to manage the same process every day.

Summary

I want to highlight these issues of data, systems, and daily production but I do not provide a template or guidebook for them. In the end, these issues


are as critical as, maybe even more critical than, the theoretical and technical issues covered in the following chapters. A successful risk project depends on getting data, systems, and daily production right. These will often take the majority of resources.

7.2 TYPOLOGY OF FINANCIAL INSTITUTION RISKS

Before turning to the details of measuring risk, I want to provide a high-level view of the types of risk faced by a financial institution.

We have defined risk as: the possibility of P&L being different from what is expected or anticipated; risk is uncertainty or randomness measured by the distribution of future P&L. In this sense, there is no distinction between, for example, market risk versus operational risk: Both encompass the possibility of gains or losses different from what is expected. Nonetheless, the sources, circumstances, and results of risk arising from different parts of a financial business are so different that there is considerable benefit from distinguishing between different risks within a financial organization.1

I discuss five major categories of risk:

1. Market risk
2. Credit risk
3. Liquidity risk
4. Operational risk
5. Other (legal and regulatory, business, strategic, reputational)

These are discussed in somewhat greater detail further on and in later chapters. I spend the most time on market and credit risk because these are the most amenable to mathematical analysis and thus have been the most studied. The areas of liquidity, operational, and other risks, however, should not be downplayed simply because they are less amenable to analysis with sophisticated mathematical tools. Remember, as we saw in Chapter 4, that many of the worst financial disasters can be traced to operational issues.

Market Risk

This is the first thing that comes to mind for financial institutions—price risk associated with market-traded securities, assets, and financial

1 Crouhy, Galai, and Mark (2006, Appendix to Chapter 1) lay out a nice typology of risk, and Marrison (2002, 4 ff) outlines various risks faced by banks.


instruments. Financial institutions are in the business of trading or managing financial assets, and market risk is the possibility that prices of traded assets will differ from what is desired, expected, or planned.2 Marrison (2002, 4) uses losses in the stock market as the classic example of market risk, citing among other instances the fall in the Dow Jones index of 31 percent during one week in October 1987 and 23 percent on Black Monday, October 19.

Market risk may go under different names in differing circumstances. For example, in managing a portfolio, it may be tracking error relative to a benchmark. In a trading context, it may be basis risk, the relation between prices of two closely related but not identical assets. For an options trade, it may be volatility risk. These are all, however, risks associated with market prices, and differ only in the particular circumstances in which they arise.

Market risk can usefully be categorized as depending on particular factors (see also Crouhy, Galai, and Mark 2006):

- Equity price risk. Associated with changes or variability in stock prices. This is often split into general market risk (associated with the level of the overall market or a market index) and idiosyncratic risk that is specific to the particular company.

- Interest rate risk. Risk associated with interest rates and fixed income (fixed interest) securities. This may appear alone, for example, with U.S. Treasury bonds, which are pure interest rate instruments, or combined with other risks, for example, a corporate bond that combines credit and interest rate risk. Interest rate risk will often be decomposed into risk from different parts of the curve. Furthermore, differences between similar but not identical instruments may be treated as basis risk or spread risk. Basis risk is not limited to interest rates but is common.

- Foreign exchange risk. Risk in assets or instruments (including cash) denominated in different currencies, different from the home currency of the portfolio or investor.

- Commodity price risk. Risk from changes in prices of commodities, whether traded in financial markets or only in physical markets. Commodity prices are not conceptually different from other asset prices. But as Crouhy, Galai,

2 Although market risk is usually focused on traded securities or assets, it can also include risk for untraded or thinly traded securities when prices can be modeled or inferred.


and Mark (2006) point out, the degree of variability can be different because of particular considerations such as concentration of supply in the hands of a few suppliers; the ease or cost of storage of the commodity itself; or perishability (for example, wheat) versus durability (for example, gold).

- Credit risk. Risk that a change in the credit quality of the entity behind a particular instrument will affect the value of the instrument. A corporate bond is a prime example, an example for which the credit quality of the issuing company will determine the market demand and thus market price of the bond itself. Credit risk is usually classified separately from market risk but the line between credit and market risk is increasingly fuzzy. Many traded securities incorporate credit risk—the market price of a corporate bond will change as the credit standing of the issuing company varies. Furthermore, with the rise of credit derivatives, many previously nontraded credit risks have become traded.

Credit Risk

Credit risk is the risk that the value of a portfolio changes due to unexpected changes in the credit quality of issuers or trading partners. This subsumes both losses due to defaults and losses caused by changes in credit quality, such as the downgrading of a counterparty in an internal or external rating system. (McNeil, Frey, and Embrechts 2005, 327)

Credit risk ultimately arises from defaults—nonrepayment of promised amounts. Credit risk is listed as a factor under the market risk section discussed earlier but is also given its own classification here; the distinction between market and nonmarket credit risk is blurry. One distinction might be that before default, it is usefully treated as market risk, while the actual default is considered credit risk. Another distinction might be that when priced and traded in the market (as in a corporate bond or in a credit default swap), it is market risk, while when it is not traded (as in trade settlement), it is nonmarket risk. Yet another is that changes related to a specific company, such as downgrades and defaults, are credit risk while changes in general market sentiment, such as changes in an industry credit spread, are market risk.3 In the end, the distinction is difficult to make, and credit risk

3 See Marrison 2002, 226–227.


arises in so many and such varied forms that it is worth considering on its own.

Credit risk deserves its own section, even though the line between market risk and credit risk is blurry, because credit risk differs in some important respects from market risk. First, market risk focuses on internal entities within the financial organization, such as trading desks or portfolios. The measurement and management of market risk, for example, imposing limits, is done by trading desk or portfolio. Credit risk, in contrast, focuses on external issuers or counterparties. Limits, for example, are imposed by counterparty. Second, the horizon for credit risk is generally longer—market risk generally is short (days) while credit risk is longer (say one year). This raises some different modeling issues that deserve attention on their own. Finally, and most importantly, the modeling of credit risk is often dramatically different from market risk—market risk relies on observed market prices while credit risk must be constructed from the underlying processes generating defaults and other credit losses.

Analysis of credit risk traces back to commercial banks and their portfolios of loans. It is easy to see that a major, maybe the major, risk for a loan is the risk that the issuer will default, that is, credit risk. Credit risk, however, extends much further than simply loans, and it permeates finance. Some of the ways credit risk appears are:

- Single-issuer credit risk, such as with loans and bonds. The default of the issuer means nonrepayment of the principal and promised interest on a bank loan or bond.

- Multiple-issuer credit risk, such as with securitized mortgage bonds. Such bonds are issued by a bank or investment bank but the underlying assets are a collection of loans or other obligations for a large number of individuals or companies. Default of one or more of the underlying loans creates credit losses.

- Counterparty risk resulting from contracts between parties, often over-the-counter (OTC) derivatives contracts. OTC transactions, such as interest rate swaps, are contracts between two parties, and if one party defaults, it may substantially affect the payoff to the other party. Other contracts, such as letters of credit, insurance, and financial guarantees, will also entail credit risk if there is potential for loss upon default of one party.

- Settlement risk. Associated with delivery and settlement of trades, the possibility that one side fails to settle a trade after being paid.4

4 Also called Herstatt risk after Bankhaus Herstatt, a small German bank that in 1974 failed in the middle of the New York Stock Exchange trading day, after receiving payments for FX trades but before delivering payments to settle the trades.


Credit risk measurement has grown in sophistication and importance over recent years. The financial crisis that hit in 2008–2009 was driven by credit issues, particularly those related to subprime mortgages in the United States. Furthermore, the past few years have seen tremendous growth in new financial instruments specifically dependent on credit, such as credit default swaps (CDS).

Credit risk will generally be asymmetric. It is asymmetric in two senses. First, the distribution of P&L for a credit portfolio will be strongly skewed or asymmetric, with a probability of large losses but not a similar probability of large gains. Second, exposure to credit risk is generally present only when a position has positive value or is an asset. When a position is positive and the counterparty defaults, the firm (bank) loses, up to the full value of the position. When a position is negative and the counterparty defaults, however, the bank cannot walk away from the obligation.

Liquidity Risk

Liquidity risk is very important but one of the more difficult risks to conceptualize and measure. Liquidity risk actually comprises two distinct concepts—asset liquidity and funding liquidity. These two can interact but it is necessary to keep them conceptually distinct, and it is unfortunate that they both go under the rubric of liquidity risk.

Funding liquidity risk, also called cash-flow risk, refers to the ability to raise or retain the debt for financing leveraged positions, meeting margin or collateral calls, or meeting fund redemptions. This issue is particularly critical for leveraged portfolios using short-term debt (such as repurchase agreements) that are subject to margin calls.

Asset liquidity risk refers to the ability to execute transactions in the necessary size at the prevailing market price in a timely fashion. Asset liquidity will differ, sometimes dramatically, across instruments, market conditions, and at different times. The markets for some assets, say, G-7 government bonds or currencies, are so deep and developed that most trades can be executed with minimal impact on market prices. Other markets, say, for an esoteric derivative or local currency emerging market bond, may be active during normal times for moderate-size trades, but effectively shut during market disruption.

Funding and asset liquidity risk can interact in a lethal combination. Adverse price movements, or even a turn in market sentiment, may induce margin calls or cancellation of loans, putting pressure on funding liquidity. If the portfolio does not have sufficient cash or sources of new funding, this will require the selling of assets. If the positions are large relative to normal market transactions or concentrated in illiquid securities, poor asset


liquidity may mean sales can be done only at very disadvantageous prices. The fall in prices may trigger further margin calls, then further asset sales, leading into a death spiral.

Jorion (2007, 333) summarizes it well:

Funding liquidity risk arises when financing cannot be maintained owing to creditor or investor demands. The resulting need for cash may require selling assets. Asset liquidity risk arises when a forced liquidation of assets creates unfavorable price movements. Thus liquidity considerations should be viewed in the context of both the assets and the liabilities of the financial institution.

. . . During episodes of systemic risk . . . liquidity evaporates. . . .

Liquidity risk probably is the weakest spot of market risk management systems.

Operational Risk

Operational risk is crucial but difficult to measure. Indeed, I argue that given the current state of understanding, the focus should be as much on managing as on measuring operational risk. We may not be able to measure it very well, but it is so critical it cannot be ignored and must be managed nonetheless.

Even the definition of operational risk is difficult and in flux. The general industry consensus (incorporating guidance from Basel regulators) defines "Operational risk [as] the risk of loss resulting from inadequate or failed processes, people, and systems or from external events" (Jorion 2007, 495). This is a balance between older narrow definitions (risk arising from operations or trade processing) and overly broad definitions that include everything not market or credit risk.

Quantitative measurement and statistical analysis of "inadequate or failed processes, people, and systems" is difficult. Nonetheless, there can be substantial returns to a disciplined approach, even if it is somewhat more qualitative than that applied to, say, market or credit risk. The Basel Committee on Banking Supervision (BCBS) (2003) outlines a framework for measuring operational risk that looks particularly useful.

Operational risk is so important because operational failures appear central to many financial disasters. Lleo (2008, quoting Jorion [2007]) summarizes the situation well: "Jorion (2007) drew the following key lesson from financial disasters: while a single source of risk may create large losses, it is not generally enough to result in an actual disaster. For such an event to occur, several types of risks usually need to interact. Most importantly, the


lack of appropriate [operational] controls appears as a determining contributor: while inadequate controls do not trigger the actual financial loss, they allow the organization to take more risk than necessary and also provide enough time for extreme losses to accumulate."

Much can be accomplished in controlling operational risks by improving process and procedures to reduce frequency and severity of errors, reduce costs, and improve productivity. As Jorion (2007, 505) points out, "The key to controlling operational risk lies in control systems and competent managers. BCBS (2003) provides common-sense advice." One aim is to make processes such that it is easy for people to do the right thing and hard to do the wrong. Furthermore, improved process and procedures can both control operational risk and increase profits by reducing costs, for example, by making costs insensitive to trade volumes. This argues against McNeil, Frey, and Embrechts (2005, 464), who state, "An essential difference between operational risk, on the one hand, and market and credit risk, on the other, is that operational risk has no upside for a bank."

Other Risks

I group other risks together. These other risks include such things as legal and regulatory risk, general business risk, strategic risk, and reputational risk. These are clearly important but I do not discuss them in detail.

7.3 CONCLUSION

We now turn to examining risk measurement in detail. Chapter 8 focuses on the tools that form the foundation of quantitative risk measurement: volatility and VaR. Chapter 9 then applies these tools to a particularly simple portfolio, the U.S. Treasury bond and CAC index futures introduced in Chapter 1. The goal of Chapter 9 is to work through a simple example in enough detail to make the ideas come alive. Although Chapters 8 and 9 concentrate on market risk, nearly all the ideas, ideas about how to conceptualize the P&L distribution and how to summarize and estimate the distribution, apply equally well to credit and other forms of risk.

Chapter 10 focuses on risk reporting and portfolio analysis tools. These tools help us to move from static monitoring of the risk (the strength of volatility and VaR) to active management of risk. Volatility and VaR help us calibrate what the scale of potential losses is and they tell us about the spread in the P&L distribution. But managing risk requires understanding the sources of risk and how changes in the portfolio are likely to alter our


exposure to losses. As such, Chapter 10 is probably the most important and useful chapter in this book.

Chapter 11 turns to credit risk—risk ultimately resulting from the actual or potential default (bankruptcy), or nonperformance of contracts. In many ways, the quantitative analysis of credit risk is no different from market risk—we care about the P&L distribution and can use the volatility or VaR to summarize that distribution. Certain characteristics of credit risk, however, require that we treat credit risk on its own. First, obtaining the P&L distribution for credit risk often requires detailed credit modeling. For market risk, the market risk factors are generally known and observed, and our task is to translate or map the distribution of market risk factors to our firm's particular holdings and securities. For credit risk, in contrast, there are no data on past defaults of our particular loan or bond—if the loan had already defaulted, we would not own it. We have to build the distribution of profits and losses from scratch, based on often-complex models.

The second reason we want to treat credit risk separately is to emphasize the often-skewed nature of the P&L distribution. Market risk is most often relatively symmetric, looking something like the bell-shaped curve of the normal distribution. Credit risk is more often highly skewed with a long left tail—a distribution that looks like Figure 8.3, with many large losses and not many large gains. It is often claimed, incorrectly, I believe, that this is due to the characteristic that credit losses often consist of many small gains and a few large losses. In fact, the skewed shape of the credit P&L distribution is more often due to the tendency of credit losses to move together. When credits go bad, they go bad together. This may result from, say, all loans being sensitive to the overall economy and thus going bad together when the economy goes into recession. Whatever the reason, however, this argues that for credit we need to pay particular attention to correlation and co-movement across credits. Unfortunately, co-movement across credits is probably the hardest part of measuring credit risk.

Chapter 12 focuses on liquidity and operational risk. These areas are not as developed, mathematically, as market and credit risk. There is, however, considerable work being done in these areas.

Although this second section of the book focuses on quantitative techniques and tools, we need to remember that all the mathematics is focused on actually managing risk. The focus must be on how these tools and techniques add to prudent business management. In this sense, quantitative risk measurement should be treated just like accounting or market research—an activity and set of tools integral to managing the business.


CHAPTER 8
Risk and Summary Measures: Volatility and VaR

As argued in earlier chapters, risk measurement is the measurement of the profit and loss (P&L) distribution. This chapter introduces the standard quantitative techniques used to analyze the P&L distribution. By standard, I mean those most widely discussed in the literature and applied in the industry. In practice this means volatility and value at risk. Remember, however, that measuring risk is only the first step in managing risk, and there is more to measuring risk than just VaR.

Value at risk (usually referred to as VaR) is the most widely used and quoted quantitative risk measure. Much of this chapter focuses on introducing VaR and demystifying its uses and abuses. VaR, however, is only one measure among many that help to quantify and understand risk. Whatever tools we use, the important goal is to understand the distribution and potential variability of the P&L.

There are many good texts that cover VaR and quantitative risk measurement. Starting with the most nontechnical and intuitive, there is Crouhy, Galai, and Mark (2006) Chapter 7, and Crouhy, Galai, and Mark (2001) Chapter 5. Marrison (2002) has a concise introduction at the end of Chapter 5 and then devotes Chapter 6 to methods for VaR estimation. Jorion (2007) is a broad and detailed reference with Chapter 5 covering the basics, and additional topics such as the estimation and application of VaR covered throughout the book. McNeil, Frey, and Embrechts (2005) is the most advanced and covers technical issues in great detail.

8.1 RISK AND SUMMARY MEASURES

Remember the definition: Risk is the possibility of P&L different from expected or anticipated: variability in outcomes, uncertainty, or randomness.


A distribution or density function is how we represent random possibilities, so the P&L distribution is the central focus for risk measurement.

Figure 8.1 shows the distribution (technically the density function1) of P&L from a hypothetical bond strategy with many possible outcomes, some giving high profits, some large losses. The horizontal axis shows the P&L, ranging from large losses to the left up to large profits to the right. The vertical axis shows the probability of any particular P&L. The pattern shown, with approximate symmetry of gains and losses and with higher probability in the middle, occurs frequently in the financial markets.

If we knew the full distribution of P&L, we would know most everything there is to know about the risk of the particular trade or portfolio. But it is rare that we will know or use the full distribution. We will usually be satisfied with some summary measure because the full distribution is too complicated to easily grasp or we simply want a convenient way to summarize the distribution.

Summary measures for distribution and density functions are common in statistics. For any distribution, the first two features that are of interest are location on the one hand and scale or dispersion on the other. Location quantifies the central tendency or some typical value, while scale or dispersion quantifies the spread of possible values around the central value. For risk measurement, scale is generally more important than

FIGURE 8.1 Profit or Loss from Hypothetical Bond Strategy (hypothetical yield curve strategy). Based on Figure 5.1 from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

1 The distribution function is F(Y) = Prob[P&L ≤ Y] and the density function is the derivative (where it exists): f(Y) = dF(Y)/dY.


location, primarily because the dispersion of P&L is large relative to the typical value.2

The summary measures that we use are often called risk measures: Numbers that summarize important characteristics of the distribution. We must remember, however, that although summary measures are extraordinarily useful, they are to some degree arbitrary, more useful in some circumstances and less useful in others. Risk itself is not a precise concept and depends on investor preferences; different investors may view the risk of the same investment differently. Because the property we are trying to measure (risk) is somewhat vague, the summary measures themselves will, of necessity, also be somewhat arbitrary. The statistician Cramér's remarks regarding location and scale measures are appropriate here: "Each measure has advantages and disadvantages of its own, and a measure which renders excellent service in one case may be more or less useless in another" (Cramér 1974, 181–182). Using these quantitative measures requires common sense, experience, and judgment.

Volatility and VaR

The most familiar measures of location and scale are the mean and standard deviation (commonly called volatility and commonly denoted σ). An example of a distribution and its mean and standard deviation are shown for a hypothetical yield curve strategy in Figure 8.2. Panel A shows a lower dispersion distribution (less spread out), and Panel B shows a higher dispersion distribution (more spread out). The mean is zero for both, but the standard deviation is higher for the distribution in Panel B.

Volatility (or standard deviation) is the square root of the average of squared deviations from the mean. Say we have the distribution or density shown in Figure 8.2. The value of the P&L is P—the profit or loss for the period (day, week, month). This is the horizontal axis, it is a random variable, and it can take values ranging from losses (negative P) to profits (positive P). The curve shown in Figure 8.2 is the density, written as g(P)dP, and gives the probability that P will be between P and P+dP. The volatility is:

$$\text{Volatility} = \sqrt{\int \left(P - \bar{P}\right)^2 g(P)\,dP} \qquad \text{Mean} = \bar{P} = \int P\,g(P)\,dP$$

2 For the S&P 500 index, the daily standard deviation is roughly 1.2 percent while the average daily return is only 0.03 percent, calculated from Ibbotson Associates data for 1926–2007, which show the annualized mean and standard deviation for monthly capital appreciation returns, which are 7.41 percent and 19.15 percent, respectively.


If instead of the distribution or density shown in Figure 8.2 we have a set of discrete data, with P&L observations P_i, then the volatility is:

$$\text{Volatility} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left(P_i - \bar{P}\right)^2} \qquad \text{Mean} = \bar{P} = \frac{1}{n} \sum_{i=1}^{n} P_i$$

The volatility is effectively an average of deviations from the mean. The greater the dispersion around the mean, the larger the volatility will be.
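In code, the discrete-data formulas are only a few lines. This is a minimal sketch in Python with NumPy; the simulated series is a stand-in for real P&L observations, with the $130,800 scale echoing the hypothetical yield curve strategy of Figure 8.2:

```python
import numpy as np

# Stand-in for observed daily P&L (dollars)
rng = np.random.default_rng(seed=1)
pnl = rng.normal(0.0, 130_800.0, size=500)

n = len(pnl)
mean = pnl.sum() / n                                       # P-bar
volatility = np.sqrt(((pnl - mean) ** 2).sum() / (n - 1))  # n-1 in the denominator

# Equivalent built-ins: pnl.mean() and pnl.std(ddof=1)
print(f"mean = {mean:,.0f}  volatility = {volatility:,.0f}")
```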

Volatility is an ideal summary measure in many cases, particularly when the distribution is symmetric, the focus is mostly on the central part of the distribution, and the extremes in the tails are either well-behaved or not of primary interest. When the distribution is nonsymmetric (skewed)

FIGURE 8.2 Location and Scale for P&L from Hypothetical Yield Curve Strategy: Mean and Standard Deviation. Panel A: low dispersion (small standard deviation); Panel B: higher dispersion (larger standard deviation). Location (mean) = 0 in both; scale is the standard deviation (e.g., $130,800). Reproduced from Figure 5.2 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


or if the focus is particularly on the tails of the distribution, volatility may have drawbacks. For example, given a distribution such as that shown in Figure 8.3, which is nonsymmetric and has a fat left tail, volatility (standard deviation) may not provide a good representation of the left tail—that is, the risk of large losses.

The standard deviation (volatility) is one dispersion measure relevant for risk measurement. The standard deviation is well known from statistics and is widely used, but it is by no means the only summary measure we could use. Value at risk, or VaR, is another popular summary measure.

VaR is a quantile of the distribution and is summarized graphically in Figure 8.4. A quantile is characterized by two numbers: first, a probability level Z, defined by the user, and second, a resulting level of profit or loss Y. The definition for VaR_Z is: the P&L level Y such that there is a probability Z that the P&L will be worse than Y and a probability 1 − Z that it will be better than Y. The P&L is measured over some fixed time horizon, for example, one day. In Figure 8.4, we can see that the VaR_5% is the point on the horizontal axis chosen so that the probability, the area under the curve below Y, is 5 percent. The idea behind VaR is simple: the level of loss is specified in such a way that a worse loss happens with a predefined probability.3

FIGURE 8.3 Distribution where Volatility May Be Less Useful Summary Measure. Reproduced from Figure 5.5 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

3 In the literature, the probability level chosen can be either the probability that loss will be worse than Y (my Z) or the probability that loss will be better than Y (my 1 − Z). Jorion (2007), for example, uses 1 − Z. For clarity, I will generally quote both Z and 1 − Z, as in "5%/95% VaR."


The idea behind VaR is quite simple: the level of loss such that worse happens with a predefined probability. In Figure 8.4, we have chosen the probability level Z to be 5 percent, so that we require the area in the left-hand tail of the distribution (the probability of a bad P&L) to be 5 percent. For this particular example, the P&L level Y that fits is −$215,000, and so the VaR_5% = −$215,000. The area at and to the left of −$215,000 is 5 percent, so we have fixed the probability of losing $215,000 or worse at 5 percent.

The definition of VaR can be expressed mathematically as:

$$Z\% \text{ VaR} = Y \;\; \text{s.t.} \;\; P[\text{P\&L} \le Y] = Z \qquad (8.1)$$

FIGURE 8.4 Five Percent VaR for P&L from Hypothetical Yield Curve Strategy. Panel A: low dispersion (small VaR); Panel B: higher dispersion (larger VaR). In each panel, Z = area = 5% and VaR = Y (e.g., $215,100). Reproduced from Figure 5.3 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


This is simply the Z quantile of the P&L distribution.4

If we knew the true distribution of P&L we could simply calculate the VaR for any Z using equation (8.1). In practice, we do not know the true distribution, but let us pretend for illustration that we do know the distribution of P&L and that it is normal.5 In such a case, the calculation is easy:

$$Z = P[\text{P\&L} \le Y] = P[\text{Standard Normal Variable} \le (Y - \mu)/\sigma]$$

where μ = mean of the normal distribution and σ = standard deviation (volatility) of the normal distribution.

Working with the distribution shown in Panel A of Figures 8.2 and 8.4, the mean is zero (μ = 0) and the volatility is $130,800. For a normal distribution, P[Standard Normal ≤ −1.645] = 0.05, so that

$$-1.645 = (Y - \mu)/\sigma = (Y - 0)/130{,}800 \;\Rightarrow\; Y \approx -215{,}100$$
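The same calculation is a one-liner given an inverse normal distribution function. A minimal sketch using scipy.stats.norm.ppf (the inverse of the normal CDF), with the example's mean and volatility plugged in:

```python
from scipy import stats

mu, sigma = 0.0, 130_800.0                    # mean and volatility of the example
var_5pct = mu + sigma * stats.norm.ppf(0.05)  # solve for Y at Z = 5%
print(f"{var_5pct:,.0f}")                     # about -215,100 (up to rounding)
```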

The volatility (standard deviation) and the VaR each summarize the dispersion of the distribution in their own way. For nice, symmetrical, well-behaved distributions, such as those shown in Figures 8.2 and 8.4, they can be used almost interchangeably. In Figure 8.2, we could ask what is the probability that the P&L is less than the standard deviation—what is the

4 As I mentioned earlier, I will take Z to be the small probability that the loss will be worse than Y, so Z might be 1 percent or 5 percent. If the P&L distribution function is F(y), then the VaR_Z or Z-quantile is F^{-1}(Z), where F^{-1} is the inverse distribution function. When the distribution function is not continuous, there are technicalities that I will ignore (see McNeil, Frey, and Embrechts 2005, 39). To add to the confusion, some texts change sign on the P&L distribution and discuss the upper tail of the distribution (defining VaR_Z = Y s.t. P[P&L ≥ Y] = Z; e.g., McNeil, Frey, and Embrechts 2005) while others focus on the lower tail but change the sign on Y to make the VaR a positive number (e.g., Jorion 2007).


probability to the left of −1σ? For a normal (Gaussian) distribution, the probability will be 15.9 percent. In other words, we could think of the volatility as the 15.9%/84.1% VaR. Alternatively, we can note that for a normal (Gaussian) distribution, the probability to the left of −1.645σ is 5 percent, so that −1.645σ is the 5%/95% VaR. For a normal distribution, volatility and VaR are direct transforms of each other and we can thus move easily from volatility to VaR and back, and in this sense, they can be used interchangeably.6

It is important to remember that volatility and VaR are merely summary measures and that each may be more or less useful, depending on circumstances. There is nothing magical about either one, although we might sometimes be tempted to think so. They merely summarize the distribution, albeit in somewhat different ways—either by looking at an average of deviations from the mean (volatility) or at a point on the tail of the distribution (VaR). Indeed, for well-behaved symmetrical distributions, they can be used almost interchangeably, and for a normal (Gaussian) distribution, we can easily convert from one to the other.

Relation between Volatility and VaR

VaR and volatility are the two measures most often used to summarize risk and they do have different strengths and weaknesses for characterizing the risk of a portfolio. In some cases, however, they can be used interchangeably, and it is important to understand the connection between them. In fact, in many practical applications, volatility is calculated first, and then a value for VaR is inferred from that volatility.

The Z% VaR is defined (see equation 8.1) as the P&L level Y that satisfies:

$$Y \;\; \text{s.t.} \;\; P[\text{P\&L} \le Y] = Z$$

For VaR, one chooses Z and then calculates the implied value of Y (which will vary with the particular distribution of P&L). Common choices when using VaR are Z = 5 percent and Z = 1 percent, but any choice will work.

6 A third summary measure, which I only mention here and which is discussed more fully shortly, is expected shortfall. For most cases, the expected shortfall is just the average loss conditional on the loss being worse than the VaR: expected shortfall = E[Loss | Loss < Y]. In Figure 8.3, VaR is the point Y, and the expected shortfall is the average of all losses Y and worse. In other words, the expected shortfall takes account not just of the point Y but also of how much worse losses could be.


For volatility, in contrast, the value of P&L Y is determined by the distribution of P&L and the definition of the standard deviation:

$$Y = \text{volatility} = \text{standard deviation} = \sigma$$

For a specific distribution, however, there is a relationship between volatility and VaR: We can always calculate the value of Z corresponding to Y = σ and consider volatility as one particular choice for probability level (Z_σ) and thus as simply one choice for VaR. The actual value of probability Z_σ will depend on the particular distribution of P&L. For a normal distribution Z_σ = 15.9 percent, and for many distributions one might use in finance Z_σ is not too far from 15 percent. (For example, for the t-distribution with n = 6, discussed further on, Z = 12.2 percent, while for a mixture of normals with 99 percent σ and 1 percent 5σ, Z = 13.5 percent.) On this basis, one can often treat volatility simply as the VaR for a rather high value of Z, generally on the order of 12 to 15 percent. And it is on this basis that, in what follows, I sometimes discuss VaR and volatility as if they were interchangeable.
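The normal and mixture-of-normals values of Z_σ can be checked directly; I leave out the t case, since it depends on how the t-distribution is scaled. A minimal sketch, assuming the mixture draws from N(0, σ²) with probability 0.99 and from N(0, (5σ)²) with probability 0.01:

```python
import numpy as np
from scipy import stats

# Z_sigma: probability that P&L falls below minus one standard deviation

# Normal: P[X <= -sigma] for X ~ N(0, sigma^2)
z_normal = stats.norm.cdf(-1.0)                    # 0.159

# Two-point mixture (in units of sigma): 99% N(0, 1), 1% N(0, 25)
sd_mix = np.sqrt(0.99 * 1.0 + 0.01 * 25.0)         # overall standard deviation
z_mix = 0.99 * stats.norm.cdf(-sd_mix) + 0.01 * stats.norm.cdf(-sd_mix / 5.0)

print(f"{z_normal:.3f}  {z_mix:.3f}")              # about 0.159 and 0.135
```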

When using VaR in actual applications, VaR is often derived from the volatility. As just discussed, there is a relationship between volatility and VaR for each specific distribution. This means that knowing the volatility (and the distribution) we can calculate the VaR for any level of Z: Simply choose Z and then calculate Y from equation (8.1). As a practical matter, this means that for a specific distribution, the VaR will be a multiple of the volatility. For the normal distribution, these multiplicative factors for some popular probabilities are shown in Table 8.1 (together with the dollar VaR for the P&L distribution in Figures 8.2 and 8.4).

The relationship between volatility and VaR is useful when the P&L distribution is symmetric or close to it, but far less useful for a nonsymmetric distribution, such as that in Figure 8.3.

TABLE 8.1 Various Combinations of Probability (Z) and P&L (Y) for Normal Distribution

Z        Y (VaR)     (Y − μ)/σ    P[Standard Normal Variable ≤ (Y − μ)/σ]
15.9%    −130,800    −1.000       0.159
5%       −215,100    −1.645       0.050
2.5%     −256,300    −1.960       0.025
1%       −304,200    −2.326       0.010
0.39%    −348,000    −2.661       0.0039
0.1%     −404,100    −3.090       0.001
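The multiplicative factors and dollar VaRs in Table 8.1 can be reproduced with the inverse normal distribution function. A minimal sketch:

```python
from scipy import stats

sigma = 130_800.0
for z in (0.159, 0.05, 0.025, 0.01, 0.0039, 0.001):
    factor = stats.norm.ppf(z)   # the (Y - mu)/sigma column
    print(f"Z = {z:7.2%}  factor = {factor:6.3f}  Y = {sigma * factor:11,.0f}")
```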


Subadditivity—Volatility, VaR, and Expected Shortfall

Although VaR is a popular summary risk measure, like any summary measure it has strengths and weaknesses. One issue deserves particular mention. Diversification is a key concept in risk measurement and a risk measure should reflect diversification: The risk of a portfolio should be the same or less than the risk of the subcomponents. In other words, if we have some measure for the risk of Portfolio A, call it Risk(Portfolio A), we would want that measure to satisfy:

$$\text{Risk}(\text{Portfolio A} + \text{Portfolio B}) \le \text{Risk}(\text{Portfolio A}) + \text{Risk}(\text{Portfolio B})$$

This is technically called subadditivity. Interestingly, VaR does not always satisfy this condition.7

McNeil, Frey, and Embrechts (2005, 241 ff) provide a simple example in which the VaR is not subadditive. Consider a set of corporate bonds that may default. Each bond costs $100 up front and in one year there will be one of two outcomes:

- No default, with profit $5 (the original $100 plus $5 interest). The probability is 98 percent.

- Default, with loss $100 (the bond is totally written off). The probability is 2 percent.

Portfolio A consists of purchasing $10,000 worth of a single bond. This portfolio has a 98 percent probability of a profit of $500, and a 2 percent probability of a loss of $10,000:

$$\text{P\&L Portfolio A} = \begin{cases} +\$500 & P = 0.98 \\ -\$10{,}000 & P = 0.02 \end{cases}$$

Portfolio B consists of 100 separate bonds, each worth $100, with each bond independent of the other bonds. This portfolio is also worth $10,000 initially, but the profit and loss will be binomially distributed:

$$\text{P\&L Portfolio B} = \$500 - 105 \times \text{Binomial}[100, 0.02]$$

7 Technically, VaR is subadditive when the underlying risk factor distributions are elliptical and the portfolio can be represented as linear combinations of risk factors. The normal, t-distribution, and two-point mixture of normals are elliptical, so this does cover many situations. Subadditivity comes under the wider concept of coherent risk measures, originally introduced by Artzner et al. (1997, 1999). See McNeil, Frey, and Embrechts (2005, section 6.1 and theorem 6.8).


Portfolio B is clearly less risky than Portfolio A. Portfolio A is a single bond with which we lose everything upon default. Portfolio B is diversified across 100 independent bonds, and this diversification intuitively makes Portfolio B less risky.

The density functions for the two portfolios are shown in Figure 8.5.8

For Portfolio A, there are only two possible outcomes, −$10,000 and +$500. (Note that the density for a single bond will look the same, but the two outcomes are −$100 and +$5.) For Portfolio B there is a range of outcomes, centered around $290.

Figure 8.5 reinforces the idea that Portfolio B is less risky. The profit for Portfolio B is spread out but there is almost no chance of a really bad outcome. For Portfolio A, the probability of losing $10,000 is 2 percent, while for Portfolio B, there is no possibility of losing $10,000; the

FIGURE 8.5 Density Function for Profit and Loss for Two Portfolios of Defaultable Bonds. Note: This is the density for the number of defaults. Portfolio A is $10,000 worth of a single bond with a probability of default of 0.02, a loss of $10,000 upon default, and a profit of $500 with no default. Portfolio B is 100 bonds, each $100, with independent probability of default of 0.02, loss of $100 upon default, and gain of $5 with no default. The density for Portfolio B is binomial.

8 I am working with the profit, and the figure shows the density of the profit. McNeil, Frey, and Embrechts work with the loss, the negative of the profit.


probability of all 100 bonds defaulting together would be:

$$P[\text{100 bonds default in Portfolio B}] = P[\text{Binomial}[100, 0.02] = 100] = \binom{n}{k} q^k (1-q)^{n-k} = \binom{100}{100} 0.02^{100}\, 0.98^{0} = 1.3 \times 10^{-170}$$

Portfolio A is effectively a portfolio with 100 versions of the same bond, and Portfolio B is a portfolio with 100 bonds with identical characteristics but bonds that default independently, which thus provides diversification. Portfolio B should be less risky. Subadditivity says that, at a minimum, the risk of a portfolio of multiple bonds should be no worse than the sum of the risk of the bonds on their own. Thus the risk for each portfolio (A and B) should be no worse than 100 times the risk of an individual bond.

Let us now calculate the 5%/95% VaR for an individual bond and for the portfolios. For a single bond, the $5 profit actually is the 5%/95% VaR. To see this, remember that the 5%/95% VaR is defined to be that profit such that there is a 5 percent probability of a worse profit. For a single bond, there is a 2 percent probability of profit less than $4.99, and 100 percent probability of profit less than $5.01; the VaR has to be between $4.99 and $5.01, and it is in fact $5. For Portfolio A, the same argument shows that the VaR is $500.

For Portfolio B, the density is more spread out. The probability of having n or more defaults and the associated profit is shown in Table 8.2. This table shows the distribution function corresponding to the density function in Figure 8.5. There is a 1.5 percent probability of six or more defaults and a

TABLE 8.2 Number of Defaults and Profit for Portfolio B of Defaultable Bonds

Number Defaults, n    Profit ($)    Prob. ≥ n defaults
6                     −130          0.015
5                     −25           0.051
4                     80            0.141
3                     185           0.323

Note: This is the number of defaults for Portfolio B, consisting of 100 bonds, each with an independent probability of default of 0.02, a loss of $100 upon default, and a gain of $5 with no default. The P&L distribution will be binomially distributed: $500 − 105 × Binomial[100, 0.02]. The table shows the number of defaults, profit, and probability of observing that number of defaults (or more) for entries around the 5%/95% VaR.
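The table entries, and the all-100-defaults probability above, follow directly from the binomial distribution. A minimal sketch using scipy.stats.binom:

```python
from scipy import stats

n_bonds, p_default = 100, 0.02

# Probability of all 100 bonds defaulting (the 1.3e-170 figure)
print(stats.binom.pmf(100, n_bonds, p_default))

# Table 8.2: profit and P[n or more defaults] around the 5%/95% VaR
for n in (6, 5, 4, 3):
    profit = 500 - 105 * n                            # $105 lost per default
    prob = stats.binom.sf(n - 1, n_bonds, p_default)  # P[defaults >= n]
    print(f"n = {n}: profit = {profit:>5}, P[>= n defaults] = {prob:.3f}")
```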


5.1 percent probability of five or more defaults. This means the 5%/95% VaR is five defaults, or a loss of $25.

The VaR for Portfolio A is 100 times the VaR for a single bond ($500 versus 100 × $5). The VaR for Portfolio B, however, is much worse: −$25 (a $25 loss) instead of +$500. Subadditivity would say the VaR of Portfolio B should not be worse than $500, but it is much worse at −$25.

We are now in the odd position of having Portfolio A with a VaR of $500 and Portfolio B with a VaR of −$25. This makes no sense because Portfolio B is more diversified, is clearly less risky, and so should not have a worse VaR. The problem is the nonsubadditivity of the VaR: the VaR for Portfolio B is worse (at −$25) than 100 times the individual bonds (100 × $5 or +$500).

In contrast to VaR, volatility is subadditive. So also is another risk measure, called expected shortfall, or ES, which is closely related to VaR. Expected shortfall is (for most cases) the expected P&L given that the loss exceeds the VaR, also called the conditional VaR:

$$\text{ES}_Z = E[\text{P\&L} \mid \text{P\&L} \le \text{VaR}_Z]\,^9$$

For comparison, we can calculate the VaR, the volatility, and the expected shortfall for the two portfolios in our simple example, and for a single bond (see the sketch after this list):

- Portfolio A: VaR $500, Volatility $1,470, Expected Shortfall −$3,700.
- Portfolio B: VaR −$25, Volatility $147, Expected Shortfall −$68.50.
- Single bond: VaR $5, Volatility $14.70, Expected Shortfall −$37.
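These numbers can be verified from the two distributions. A minimal sketch: the volatilities follow from the Bernoulli/binomial variances, and expected shortfall is approximated with the tail-average definition ES_z = (1/z) ∫₀ᶻ VaR_p dp (footnote 9 below), evaluated on a fine grid of probabilities:

```python
import numpy as np
from scipy import stats

z, q, n = 0.05, 0.02, 100                 # VaR level, default probability, bonds

# Volatility: each bond's two outcomes differ by $105, so sd = 105*sqrt(q(1-q))
vol_single = 105 * np.sqrt(q * (1 - q))   # 14.70
vol_a = 100 * vol_single                  # one bond, 100x the size: 1,470
vol_b = np.sqrt(100) * vol_single         # 100 independent bonds:     147

# Quantile (VaR_p) functions for the profit; small p = worst outcomes
def var_single(p):
    return np.where(p <= q, -100.0, 5.0)

def var_b(p):
    defaults = stats.binom.ppf(1 - p, n, q)   # worst losses = most defaults
    return 500.0 - 105.0 * defaults

grid = np.linspace(1e-7, z, 200_000)          # probabilities in (0, z]
es_single = var_single(grid).mean()           # about -37
es_a = 100 * es_single                        # about -3,700
es_b = var_b(grid).mean()                     # about -68.5
```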

We can see that VaR gives nonintuitive results. For Portfolio A, the VaR, the volatility, and the expected shortfall are all 100 times the values for a single bond, and this makes sense. For Portfolio B, the volatility and expected shortfall are both better than 100 times the single bond, and thus show Portfolio B as less risky, reflecting diversification. This also makes sense. Where things break down is the VaR for Portfolio B, which is substantially worse than 100 times the VaR for the single bond. The problem is that VaR is not subadditive. In this case, VaR is nonsubadditive

9 Expected shortfall is the conditional expectation only for a continuous distribution—see McNeil, Frey, and Embrechts (2005, 44 ff). For a general P&L distribution,

$$\text{ES}_z = \frac{1}{z} \int_0^z F^{-1}(p)\,dp = \frac{1}{z} \int_0^z \text{VaR}_p\,dp$$


because of the very skewed P&L distribution. The distribution for the diversified portfolio is substantially less skewed.

Essentially, ES takes into account losses that are worse than the value-at-risk level, while the VaR measures only the loss at the value-at-risk level and does not depend on the distribution below that point. This means that expected shortfall will differentiate between two distributions that have the same VaR but where one of them has a much fatter tail, as shown in Figure 8.6.

Time Scaling or Time Aggregation

So far, we have taken the P&L distribution for some fixed time period, say one day, and volatility and VaR apply only for that time period. We often want to measure the distribution over longer (or shorter) time periods, and

FIGURE 8.6 Two Distributions, One with Thin and One with Fat Tails, Both Having the Same VaR but Different Expected Shortfall. Panel A: distribution with thin tail, small expected shortfall; Panel B: distribution with fat tail, large expected shortfall. In each panel, Z = area = 5% and VaR = Y. Reproduced from Figure 5.6 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


it would be very useful if we had a simple rule for moving from shorter-period to longer-period distributions.

The question, then, is how does the one-day (shorter-period) distribution translate or scale to a longer period? This is referred to as the problem of time scaling or time aggregation. In particular, how do summary measures such as volatility and VaR scale with time?

The question is simple to pose because the P&L for multiple days will simply be the sum of P&L for individual days:

$$\text{P\&L}_{\text{week}} = \text{P\&L}_{\text{M}} + \text{P\&L}_{\text{Tu}} + \text{P\&L}_{\text{W}} + \text{P\&L}_{\text{Th}} + \text{P\&L}_{\text{F}}$$

When daily P&L is normally distributed, the sum will also be normal. This simplifies time aggregation, since the form of the distribution does not change when moving to longer time periods. If we further assume that each day has the same variance and that P&L is independent from one day to the next, then the scaling rule is the well-known square root of time:

\[ \text{Volatility or VaR for } h \text{ days} = \text{Volatility or VaR for 1 day} \times \sqrt{h} \]

This results from knowing that for the sum of independent variables, the variance scales linearly:

\[ X_i \sim N(0, \Sigma) \;\Rightarrow\; \sum_{i=1}^{h} X_i \sim N(0, h\Sigma) \]

so that volatility scales as a square root. This simple rule is widely used and very useful, but the assumptions on

which it is based are not always appropriate. The assumption of independence is reasonable for financial time series.10 Removing the assumption of normality makes little difference, since distributions such as the t-distribution and simple mixture of normals will scale in the same way.11 The assumption that variances are the same every day, however, is more likely to be violated, and there does not seem to be any simple approach to relaxing this assumption (see Jorion, 2007, 133).

10 But see Jorion (2007, section 4.5.2) for a discussion of scaling if serial correlation is present.
11 For so-called elliptical distributions, which include the normal, Student t-distribution, simple mixture of normals, and many others, the sum, or convolution, is also elliptical. (The random variables must be independent and have the same dispersion matrix [variance].) See McNeil, Frey, and Embrechts (2005, 95). The new elliptical distribution may not be the same form as the original. The normal is the exception. As is well known, the sum of independent (marginally) distributed normals is also normal, and the sum of jointly normal variables (independent or dependent) will be normal.


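As a quick numerical illustration of the square-root-of-time rule, here is a minimal sketch (the $130,800 daily volatility is the figure quoted later in this chapter for the $20 million bond):

```python
import numpy as np

vol_1day = 130_800   # daily P&L volatility in dollars ($20M Treasury bond)
z_99 = 2.326         # 1%/99% quantile of the standard normal

# Under i.i.d. daily P&L with constant variance, volatility and VaR
# both scale with the square root of the horizon h.
for h, label in [(1, "1 day"), (5, "1 week"), (21, "1 month")]:
    vol_h = vol_1day * np.sqrt(h)
    print(f"{label:8s} volatility ${vol_h:11,.0f}   1% VaR ${z_99 * vol_h:11,.0f}")
```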

The discussion so far has implicitly assumed that the time aggregation is over a relatively short period, such as a week or a month, when mean growth and discounting are not important. Aggregation over long time periods such as a year is discussed in a later chapter under the topic Economic Capital, where the question of the level of equity capital and bankruptcy is considered.

8.2 COMMENTS REGARDING QUANTITATIVE RISK MEASURES

I want to highlight some points that I think are not sufficiently elaborated in standard treatments of quantitative risk measures, VaR in particular, or that are misunderstood by many users. I am not criticizing or rejecting VaR. There are many critiques of VaR, many of which are not justified. Some commentators have said that it is useless and even a fraud. In my experience, views usually fall at one of two extremes:

1. Pro-VaR: It is the silver bullet that answers all risk-measurement questions.
2. Anti-VaR: It is at best useless, more often outright misleading or worse.

As often happens, the truth is closer to a synthesis of the two views: VaR can provide useful information but has definite limitations. When properly understood and appropriately applied, VaR provides information and insight, but when VaR is misapplied or misunderstood it can certainly be misleading.

Standard Trading Conditions versus Extreme Events

There are two related but somewhat divergent uses of summary risk measures such as VaR and volatility:

1. To standardize, aggregate, and analyze risk across disparate assets (or securities, trades, portfolios) under standard or usual trading conditions.

2. To measure tail risk or extreme events.

Risk measurement theory and texts usually focus on extreme events and VaR, but risk measurement in practice focuses as much on standard trading


conditions and summary measures other than VaR. Paying heed to risk under standard trading conditions is important for two reasons. First, comparing and analyzing risk across disparate assets and complex portfolios provides information necessary for understanding and managing trading results under standard trading conditions (which is, by definition, most of the time), and can also provide valuable clues to performance under more extreme conditions. Second, focus on VaR alone for measuring tail risk or extreme events can be as misleading as it is enlightening. Measuring tail events is very difficult and delicate, and blind reliance on a single statistic is a mistake.

Used in the first sense (measuring risk under standard trading conditions), risk summary measures are an aid to understanding and comparing different assets, trades, or portfolios: providing a user with important and useful information on how much a trade might be expected to make or lose, even when the user is not intimately familiar with a particular security or market. The focus is on normal or usual trading conditions. In this context, volatility is used as often as or more than VaR (a quantile of the distribution). This use of risk measures is relatively straightforward, robust, and generally noncontroversial. I also think it is widely underappreciated as a tool in risk measurement.

The need to compare across disparate products was apparently the driving force for the original development of VaR at JPMorgan—in response to chairman Dennis Weatherstone's need to understand risk across the various divisions and products of the bank.12 Weatherstone came from the FX trading desk and had a good intuitive grasp of risk, but needed some way to quickly and easily compare risks with which he was not so intimately familiar. VaR and volatility are good tools for such comparisons. They are also good as tools for aggregating disparate risks into an overall number. But VaR or volatility is no substitute for a true understanding of risks. Consider JPMorgan again: "Weatherstone had been a trader himself; he understood both the limits and the value of VaR. It told him things he hadn't known before. He could use it to help him make judgments" (Nocera 2009). VaR is a valuable tool for comparing across products but no substitute for true understanding and good judgment.

Used in the second sense of measuring tail or extreme events, VaR is sometimes referred to as the "statistically worst-case loss." This is a horribly misleading idea, since no matter how one chooses VaR, one can be virtually assured that something worse will eventually happen.

12 According to "Risk Management" by Joe Nocera, New York Times magazine section, January 4, 2009.


By their nature, tail events are rare, and so measuring tail events is inherently difficult and open to large errors and uncertainty. As a result, when applied in this second sense, VaR must be used carefully and any conclusions treated with care.

These two uses of summary risk measures can never be precisely separated, but the conceptual differentiation clarifies some of their uses, benefits, and limitations. For usual or normal trading conditions, the statistical and quantitative techniques are pretty standard, and the interpretation of results relatively straightforward. For example, using volatility and assuming normality or linearity of the portfolio may be satisfactory when considering the central part of the distribution, meaning that simple and computationally efficient methods will often be appropriate. Measuring tail events, in contrast, is delicate, and the appropriate statistical and quantitative techniques often difficult. Normality is generally not appropriate, requiring more complex statistical assumptions and more sophisticated quantitative, numerical, and computational techniques. The inherent variability of tail events is generally higher than for the central part of the distribution, and uncertainty due to estimation error and model error is larger. As a result, the estimation of VaR or other summary measures for tail events is inherently more difficult, and the use and interpretation of results is more problematic.

Miscellaneous Comments Regarding VaR

The first point to highlight is that VaR is sometimes referred to as the "worst-case loss" or "statistically worst-case loss," and as mentioned before, this is a horribly misleading idea. By definition, there is a probability that losses will be worse than the VaR. Furthermore, no matter what loss one might choose as the "statistically worst-case loss," one can be assured that sometime, somewhere, it will be worse.13 In reality, VaR is best thought of as measuring outcomes that, while out of the ordinary, are still reasonably likely and not worst-case possibilities. The most reasonable statement I have seen comes from the excellent paper by Litterman (1996, footnote 1): "We think of this [VaR measured as one-day, once-per-year or Z = 1/250] not as a 'worst case,' but rather as a regularly occurring event with which we should be comfortable."

13 In reality, the "statistically worst-case loss" is the destruction of our world as we know it. Possibly by a large asteroid, or nuclear cataclysm, or something else that is totally unforeseen. Unfortunately, the word worst is commonly applied to VaR. Crouhy, Galai, and Mark (2001, 187) wrote: "Value at risk can be defined as the worst loss that might be expected from holding a security or portfolio . . . given a specified level of probability." Jorion (2007, 106) wrote: "VAR is the worst loss over a target horizon such that there is a low, prespecified probability that the actual loss will be larger." I am not criticizing these texts generally (they provide good treatments of the topic), only the misleading application of the word worst to VaR.



In fact, using the word worst in relation to VaR shows a profound misunderstanding of probability. I would even go so far as to argue that "statistically best-case loss" is a better term because the VaR is closer to the best rather than the worst that we should see on bad days. To see why I say this, let us look at an example. Consider the 1%/99% VaR for our $20 million bond holding. The VaR is roughly −$304,200. We should expect to see this loss roughly 1 out of 100 days. But what will we actually see on the worst out of 100 days? What will the loss actually be on that day? We cannot say with certainty, but we can calculate the distribution; the appendix at the end of this chapter gives the formulae (Appendix 8.1, "Distribution of Extremes"). If we are willing to assume that the P&L itself is normally distributed, then the P&L on that 1 out of 100 days will be worse than −$304,200 with 63 percent probability and better than −$304,200 with only 37 percent probability. In other words, −$304,200 is closer to the best we will see on that bad day than the worst. The VaR tells us a lot, but it never tells us the worst we should expect.14
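The 63/37 split comes straight from the distribution of the sample minimum. A minimal sketch of the calculation, assuming i.i.d. daily P&L with a continuous distribution:

```python
# Probability that the worst P&L out of n i.i.d. days falls below the
# Z-quantile (the Z% VaR): P(min < q_Z) = 1 - (1 - Z)^n.
n, z = 100, 0.01
p_worse = 1 - (1 - z) ** n
print(f"P(worst of {n} days worse than the 1% VaR):  {p_worse:.2f}")      # ~0.63
print(f"P(worst of {n} days better than the 1% VaR): {1 - p_worse:.2f}")  # ~0.37
```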

The second point is that VaR is one among various summary measures of the distribution, and as with any summary measure, there may be much hidden from sight. Examples of characteristics that a particular summary measure may not highlight would be the degree of asymmetry or fat tails in the distribution, about which more is discussed further on. VaR is one among many measures of dispersion, sometimes useful, sometimes less so. Like any measure of dispersion, the choice of VaR is to some extent arbitrary and it should be used and judged on its efficacy. There will be cases when a measure of dispersion other than VaR, or some other approach to summarizing the distribution of P&L, proves more useful than slavishly using VaR.

Third, VaR is usually used as a measure of the tail of the distribution. As there is large variability and uncertainty in tail events, VaR, when used to measure tail events, must be employed with special caution. As a corollary, estimating VaR can be challenging precisely because it is a tail measure. The lower the probability of Z (for example, going from 5%/95% to 1%/99%), the more difficult estimating VaR is. Tail events are by their nature rare and consequently hard to measure.

Fourth, any estimate of VaR is based on how the portfolio would have behaved under certain historical conditions. Parametric VaR approximates history by a parametric distribution; historical VaR directly uses the historically observed distribution for risk factors; Monte Carlo uses an assumed distribution of risk factors, based in some way on history. These show how the portfolio would have behaved in the past and may not predict how the portfolio will behave in the future. I do not mean this as a criticism. Although VaR is often criticized as backward-looking, doing so misses the point that understanding how the portfolio would have behaved under past circumstances provides valuable information and insight. Understanding the past is the first step toward understanding what might happen in the future. Remember George Santayana's words.

14 A graph of the P&L for the worst out of 100 days is shown in Appendix 8.1, Figure 8.13.



The final point to highlight is that putting a number on risk, with VaR or any other measure, does not reduce the underlying variability, only our ignorance about that variability. Nonetheless, by measuring and assigning a number, we often fall into an "illusion of certainty" (to borrow a term from Gigerenzer 2002, 38): a human tendency to believe in the certainty of outcomes and underestimate or misestimate the importance of chance. Assigning a number to VaR provides an example in which our intuition may not fully anticipate the degree of variability. With the 1%/99% VaR, we might expect to see 1 day out of 100 with a P&L worse than the VaR. But the world is chancy and we may or may not see exactly one day, and even if we do see one, that day may or may not be near the 1 percent VaR. If we examine 100 trading days, there is a good chance that we will see no days worse than the VaR (37 percent chance) or two or more days worse than the VaR (26 percent chance), and a decent chance we will see a P&L substantially worse than the VaR. Just because we know the VaR does not mean the world will cooperate and deliver exactly that P&L. The world is chancy and we should never forget Benjamin Franklin's maxim: ". . . [I]n this world nothing is certain but death and taxes" (letter to Jean Baptiste Leroy 1789).
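The chances just quoted are straightforward binomial calculations; a minimal sketch using scipy:

```python
from scipy.stats import binom

n, z = 100, 0.01  # 100 trading days, 1%/99% VaR
print(f"P(no days worse than VaR):          {binom.pmf(0, n, z):.2f}")  # ~0.37
print(f"P(exactly one day worse than VaR):  {binom.pmf(1, n, z):.2f}")  # ~0.37
print(f"P(two or more days worse than VaR): {binom.sf(1, n, z):.2f}")   # ~0.26
```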

8.3 METHODS FOR ESTIMATING THE P&L DISTRIBUTION

So far we have been talking about the P&L distribution as if we already knew the distribution, as if somebody gave it to us. This is clearly not the case. We have to estimate the P&L distribution, and that is never easy. The process is usually complex, dependent on the particular circumstances, and subject to the details of the actual portfolio. The basic idea, however, is simple: we want to trace out an estimate of the P&L distribution, or enough of the distribution to allow calculation of the appropriate summary measures, usually the volatility or the VaR.


Our first inclination for estimating the portfolio P&L might be to simply measure the price history of our past portfolio. If our portfolio is just a single asset, say the 10-year U.S. Treasury bond introduced in Chapter 1, and we haven't changed that portfolio for a long time, then maybe this would be feasible. Real-world portfolios, however, include multiple trades and instruments that change over time. The history on the past portfolio usually will not represent how the current portfolio might behave. The behavior and interaction of many instruments, many not in the portfolio in the past, needs to be incorporated.

It is generally useful to think of the P&L as resulting from two components:

1. External market risk factors.
2. Positions—that is, the firm's holdings and the security characteristics that determine the sensitivity to risk factors.

The exposure to risk factors and the distribution of risk factors are combined to obtain the distribution of P&L.

It is fruitful to split the P&L into market risk factors versus positions and their sensitivity to risk factors for two primary reasons. First, multiple positions will usually depend on a single risk factor (or a small group of risk factors). This reduces the dimensionality of the problem. The P&L will depend on a relatively small number of risk factors rather than a large number of positions.

FX forward contracts are a good example. A portfolio might contain a one-week, one-month, and two-month forward contract on USD-EUR (all having aged from originally traded three-month contracts). These forward contracts will all depend, first and most importantly, on the spot USD-EUR FX rate.15

15 For example, the USD PV (in $M) of a one-month forward contract to sell $100M at an agreed 1.42 forward rate will be:

\[ -100 \times \left[ \frac{1}{1 + r_u \cdot dc} - \frac{X/F}{1 + r_e \cdot dc} \right] \]

where F = pre-agreed forward rate (USD/EUR, in this case 1.42), X = current spot rate (USD/EUR, such as 1.40), and re, ru = one-month euro and U.S. rates (before day-count adjustment, such as 0.43 percent and 0.26 percent). Changes in the present value (PV) will be most strongly affected by the spot rate X, with changes pretty close to one for one with X. There will be a smaller contribution from the interest rates. (The PV in this case would be $–1.45M.)


The spot FX rate will be the risk factor that determines the prices and price changes of these three, plus all other USD-EUR contracts.16

The second reason it is fruitful to separate market risk factors versus positions is that portfolio positions and consequent exposures can and often do change over time, sometimes dramatically. Market risk factors are external to the firm and conceptually different from portfolio positions. Positions and consequent exposures are generally under the control of the firm and may change frequently. Market risk factors are generally independent of a firm's actions, and distributions of these factors generally do not change dramatically over short periods (such as days or weeks).17

There are three widely used methods for estimating volatility and VaR: parametric (also called linear, delta normal, or variance-covariance), historical simulation, and Monte Carlo. The three methods differ in both assumptions about and estimation of market risk factor distributions and in how exposures to risk factors are treated, but they share many common attributes and limitations. Most importantly, they all share the conceptual distinction between market risk factors and portfolio positions.

There is considerable debate on the pros and cons of alternate methods, but one should never forget that we can only estimate the P&L distribution; we will never have the true volatility, VaR, or whatever. The alternative methods are merely alternative estimation strategies and should be judged on their usefulness. In different circumstances and for different portfolios, one will be better than another, but there is no single right answer.

Outline for Generating P&L Distribution

As pointed out earlier, estimating the P&L distribution and implementing quantitative risk measurement techniques is usually complex. Nonetheless, estimation usually conforms to the following four steps in one way or another. These steps are shown figuratively in Figure 8.7 (see also Jorion 2007, 107):

1. Asset/Risk Factor Mapping—Calculate transformation from individual assets to risk factors

16 An FX contract will in fact also depend on the interest rates or interest rate differential for the two currencies, so that a full risk analysis would be done not with the spot FX rate alone but with the small set of factors: {spot rate, currency 1 yield curve, currency 2 yield curve}. The interest rate risk, however, is negligible relative to the spot FX risk.
17 I say "generally" because sometimes it seems that the market moves almost overnight from tranquil to panic mode.


[Figure 8.7: a flow diagram of the four steps—(1) mapping from assets to risk factors; (2) risk factor distributions; (3) calculate risk factor P&Ls and sum to the portfolio P&L distribution; (4) calculate characteristics of the portfolio P&L distribution: VaR, volatility, expected shortfall.]

FIGURE 8.7 Outline of Methodology for Estimating VaR or Other Characteristics of the P&L Distribution


2. Risk Factor Distributions—Estimate the range of possible levels and changes in market risk factors

3. Generate P&L Distribution—Generate risk factor P&L and sum to produce the portfolio P&L distribution

4. Calculate Risk Measures—Estimate the VaR, volatility, or other desired characteristics of the P&L distribution

The following sections examine these steps in detail, contrasting the three approaches (parametric, historical, and Monte Carlo) where appropriate. In the next chapter, we develop a simple portfolio and estimate the P&L distribution by each of the three methods.

Step 1—Asset/Risk Factor Mapping

By asset/risk factor mapping, I mean the transformation from the actual positions and securities in the portfolio to market risk factors. The mapping allows us to calculate P&L as a function of market moves. As pointed out earlier, risk factors are treated as the fundamental variables because multiple positions or securities usually depend on a single risk factor or small group of risk factors.

Conceptually, the most straightforward mapping or transformation is to build a full valuation model of the instrument or security as a function of the market risk factors. This is sometimes complex but it does provide the framework with which we can understand the how and why of mapping. It also, in a real sense, provides the gold standard against which we want to compare alternative methods.

There are, of course, times when we cannot use the valuation model approach. The various methods for translating from assets to risk factors, ordered roughly from most accurate to least accurate, are:

& Valuation Model—Using a pricing model to obtain asset price as a function of the risk factor. Valuation model mapping may use either sensitivities (deltas or linear approximation) or full revaluation. Sensitivities generally work well for assets for which P&L is close to linear, as for most bonds and equities, and are used for parametric estimation. Full revaluation is necessary for highly nonlinear products such as options, and is often used for historical or Monte Carlo estimation.

& Statistical or empirical factor mapping—Using the empirical relation between an asset and an index, as in using the beta for an equity.

& Other mapping or binning—Map each actual asset to some set of standardized assets.

& Proxy—Replace an asset with a proxy.


Valuation Model We might represent the pricing model (assuming for simplicity only a single risk factor) as the pricing function f():

\[ \mathrm{PV}_{\text{asset } i} = f_i(rf), \quad \text{assets } i = \{1, \ldots, n\} \]

Different assets that depend on the same risk factors may well have different pricing functions. For example, both a bond and a bond option depend on yields and the forward curve, but the functional form of the pricing model will be quite different.

To understand the valuation model approach, let us revisit the example of USD-EUR FX forward contracts mentioned earlier. Say our portfolio is U.S. dollar-denominated, but we have a €100 million euro cash balance and also one-month, two-month, and three-month forward contracts to sell $100 million forward (maybe put on to hedge the purchase of euro-denominated bonds in a USD-denominated portfolio). In this case, we will be at risk to changes in the value of the euro cash balance and the present value (PV) of the forward contracts. The market risk factor for all of these will be the spot USD-EUR FX rate, X.

The dollar value of the euro cash balance is:

\[ 100 \times X \]

The PV of forward contract i (PV in millions of dollars, for a contract that sells $100 million at an agreed rate of F_i) would be:

\[ f(rf) = \mathrm{PV}(X, r_{ui}, r_{ei}) = -100 \times \left[ \frac{1}{1 + r_{ui} \cdot dc_i} - \frac{X/F_i}{1 + r_{ei} \cdot dc_i} \right] \]

F_i = pre-agreed forward rate for contract i (USD/EUR, a value such as 1.42)
X = current spot rate (USD/EUR, such as 1.40)
r_{ei}, r_{ui} = one-month euro and U.S. rates (before day-count adjustment, values such as 0.43 percent and 0.26 percent)
dc_i = day-count fraction (such as 30/360 for a contract 30 days out)

This formula tells us how the U.S. dollar market value will change as the spot FX rate, X, changes.

We have accomplished two things with this mapping or transformation. First, we have reduced the cash balance and three contracts to a single


market risk factor, the spot FX rate X. (We are ignoring the small risk with respect to the USD and EUR interest rates, but we will return to this shortly.) Any additional forward USD-EUR contracts, of whatever maturity, will similarly map to spot FX risk. For large portfolios, this reduction to a smaller number of market risk factors is quite important.

Second, we have illuminated some of the dependency in risk across different positions. The euro cash balance and the forward contracts have the same risk, and the mapping to the market risk factor X makes this explicit.

This example also highlights another aspect of full valuation modeling. The FX forwards depend on the U.S. dollar and euro interest rates, the same rates that bonds, swaps, or interest rate futures will depend on. The valuation model method, when done properly, will ensure that similar risks across diverse assets (in this case, interest rate risk for FX forwards and for bonds or swaps) will be properly, and automatically, captured.18

Another example of when valuation modeling works well is bonds or swaps priced off a yield curve. A yield curve would be built out of a relatively small number of bonds (for example 1-, 5-, 10-, and 30-year bonds). All similar fixed income instruments could be valued off the yield curve and would depend on the 1-, 5-, 10-, and 30-year bond yields as market risk factors. We could now use those yields as risk factors for all bonds, whatever the bond maturity.19

For the U.S. 10-year Treasury bond introduced in Chapter 1, and which we revisit again in Chapter 9, such a yield curve model is particularly simple and transparent. The 10-year bond depends only on the 10-year yield, and so the yield curve model effectively collapses to the standard yield-to-maturity formula. The 10-year yield is the market risk factor and the price-from-yield function is the valuation model.

Parametric estimation will use a linear approximation to the valuation model—sensitivities or deltas. The P&L is approximated by a Taylor series expansion; a linear function of risk factor changes:

18 In this case, the FX risk dominates. For a contract to sell $100 million 30 days forward with F = 1.42, X = 1.40, ru = 0.26 percent, and re = 0.43 percent, the PV will be $1.39 million. The sensitivity to the spot FX rate will be about $0.9857 million for every 1 percent weakening in the FX rate (going from 1.393 to 1.407). The volatility of the FX rate is about 0.79 percent per day, so the volatility of this position (due to changes in FX rates) will be about $780,000 per day. The sensitivity to the U.S. interest rate ru will be about $833/bp, and the volatility of rates might be 2.5 bp per day, so the interest rate volatility will be about $2,000 per day.
19 See Coleman (1998a) and Coleman (2011a) for a discussion of building yield curves and calculating DV01s from yield curves.


\[ \mathrm{PV}_{\text{asset } i} \text{ for } rf + h = f_i(rf + h) \approx f_i(rf) + h \cdot \mathrm{Sens}_{\text{asset } i} \]

\[ \text{P\&L} \approx h \cdot \mathrm{Sens}_{\text{asset } i} \]

Sensitivities are the derivative of the pricing function:

\[ \mathrm{Sens}_{\text{asset } i} = \partial f_i(rf) / \partial rf, \quad \text{assets } i = \{1, \ldots, n\} \]

This is just using the first derivative of the valuation function to approximate changes in the P&L as a function of changes in risk factors. For our FX forward example, the sensitivity with respect to the spot FX rate will be:

\[ \partial \mathrm{PV}(X, r_{ui}, r_{ei}) / \partial X = 100 \times \frac{1/F_i}{1 + r_{ei} \cdot dc_i} \]

It is convenient in this case to measure the sensitivity to a percent change in the FX rate, which would give:

\[ X \times \partial \mathrm{PV}(X, r_{ui}, r_{ei}) / \partial X = 100 \times \frac{X/F_i}{1 + r_{ei} \cdot dc_i} \]

For a contract to sell $100 million 30 days forward with F = 1.42, X = 1.40, ru = 0.26 percent, and re = 0.43 percent, the sensitivity will be about $0.9857 million per 1 percentage point change in the FX rate (say, going from 1.393 to 1.407).
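A minimal sketch of this mapping in code—the forward pricing function and its delta, using the illustrative rates from the text (function and variable names are mine):

```python
def pv_fx_forward(X, F=1.42, r_u=0.0026, r_e=0.0043, dc=30 / 360):
    """USD PV ($M) of selling $100M forward at rate F, with spot X (USD/EUR)."""
    return -100 * (1 / (1 + r_u * dc) - (X / F) / (1 + r_e * dc))

X0 = 1.40

# Analytic sensitivity to a 1 percentage point change in the spot rate:
# 0.01 * X * dPV/dX = (X/F) / (1 + r_e * dc), in $M.
delta_1pct = (X0 / 1.42) / (1 + 0.0043 * 30 / 360)

# Full revaluation for the same 1% FX move.
pnl_full = pv_fx_forward(X0 * 1.01) - pv_fx_forward(X0)

print(f"PV at X = {X0}:        {pv_fx_forward(X0):+.4f} $M")
print(f"delta per 1% FX move: {delta_1pct:.4f} $M")  # ~0.9857
print(f"full-reval P&L, +1%:  {pnl_full:.4f} $M")
```

Because this forward's PV is linear in the spot rate, full revaluation and the delta approximation agree exactly; for an option they would diverge, which is why full revaluation matters for nonlinear products.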

Historical and Monte Carlo estimation may use either linear (sensitivities) or full valuation, but commonly uses full valuation. For full revaluation, the function f_i(rf) itself would be used and f_i(rf) would be reevaluated for each required value of the risk factor rf.20

Examples of common assets, risk factors, and pricing models are shown in Table 8.3.

Using a pricing model as described here is in a sense the most accurate way of determining the P&L or sensitivity, since it uses a mark-to-market model. It is also generally the most closely aligned with the valuation and analytics done at the micro level by the units managing the risk.

20 There is a subtle issue for full valuation when we look at changes in risk factors, particularly when we use the historical approach. We need to use the historical market risk factors but apply them as if they applied to the assets today. I discuss the issue more fully when we turn to the example in Chapter 9.


Statistical or Empirical Factor Mapping There are many situations in which there is not a formal pricing model, but the relationship between an asset and risk factor is well understood from empirical or statistical analysis. The primary example would be single stocks (equities). Market practice, supported by considerable academic research (the capital asset pricing model, or CAPM), is to consider the percentage or logarithmic return on the equity to be made up of market and idiosyncratic components:

\[ r_i = \beta_i r_m + \epsilon_i \]

The market is usually taken to be a broad market index such as, for the United States, the S&P 500, the Russell 3000, or the Wilshire 5000. The residual is idiosyncratic, uncorrelated with the market and assumed uncorrelated with other stocks. The P&L for a collection of stocks will then be:

\[ \sum_i v_i r_i = \left( \sum_i v_i \beta_i \right) r_m + \sum_i v_i \epsilon_i \]

Common practice is to use r_m as the risk factor, with sensitivity \( \sum_i v_i \beta_i \). There is in fact a second factor, the idiosyncratic component \( \sum_i v_i \epsilon_i \). If one is willing to assume the idiosyncratic components are jointly independent and normal, then this is the sum of independent normals, and one can either calculate its variance from the \( \beta_i \) and the equities' variances, or, for a large portfolio, assume it is negligible. Alternatively, one can estimate the variance from history, estimate its correlation with risk factors other than r_m, and use it as a risk factor on its own.

TABLE 8.3 Common Assets Transformed through a Pricing Model

Asset | Risk Factor(s) | Pricing Model
Government bond | Spot rates or forward rates | PV off forward curve
Interest rate swaps | Spot rates or forward rates | PV off forward curve
FX forward contract | Spot FX rate and interest rate levels or differentials | PV of future currency amounts translated to base currency and discounted to today
Options | Underlying asset (could be interest rates, equities, commodities, FX rates) and interest rates for discounting | Option pricing model such as Black-Scholes
Credit default swaps | Credit spreads or default rates, interest rates for discounting | Stochastic default process, uncertain cash flows discounted back to today
Corporate bond | Credit spreads or default rates, interest rates for discounting | Stochastic default process, uncertain cash flows discounted back to today


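A minimal sketch of the single-factor equity mapping described above, with hypothetical holdings, betas, and volatilities of my own choosing:

```python
import numpy as np

v = np.array([10.0, 5.0, 8.0])              # position values ($M), hypothetical
beta = np.array([1.2, 0.8, 1.0])            # betas versus the market index
sigma_m = 0.010                             # daily market volatility (1.0%)
sigma_e = np.array([0.015, 0.020, 0.012])   # idiosyncratic daily volatilities

# All positions map to the single market factor r_m with sensitivity
# sum_i v_i * beta_i; idiosyncratic variances add under independence.
market_sens = v @ beta
idio_var = np.sum((v * sigma_e) ** 2)

port_vol = np.sqrt((market_sens * sigma_m) ** 2 + idio_var)
print(f"market sensitivity: {market_sens:.1f} $M per unit of r_m")
print(f"portfolio daily volatility: {port_vol:.4f} $M")
```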

This example of equities assumes a single factor—r_m. One can take the analysis another step, along the lines of arbitrage pricing theory, and include multiple market factors, either observable factors such as industry-specific return, or statistically derived factors from something like principal components analysis. The return for asset i is then

\[ r_i = \beta_i^1 rf_1 + \beta_i^2 rf_2 + \cdots + \epsilon_i \]

The P&L for a collection of assets will be:

\[ \sum_i v_i r_i = \left( \sum_i v_i \beta_i^1 \right) rf_1 + \left( \sum_i v_i \beta_i^2 \right) rf_2 + \cdots + \sum_i v_i \epsilon_i \]

Statistical factor mapping is most commonly applied to equities, but may also be applied to credit spreads (where factors might include rating and industry).

Other Mapping or Binning Many texts (for example, Marrison 2002, 131 ff; Jorion 2007, 283 ff; Mina and Xiao/RiskMetrics 2001, 43 ff) discuss cash flow mapping or binning for fixed-income instruments. The idea and terminology are mostly an artifact of the RiskMetrics methodology.

In RiskMetrics, fixed-cash-flow (nonoption) instruments are priced by discounting the cash flows using a yield curve model built assuming linearly interpolated zero rates. For calculations, a limited set of yield curve points (for example, 1mth, 3mth, 6mth, 1yr, 2yr, 3yr, 4yr, 5yr, 7yr, 9yr, 10yr, 15yr, 20yr, 30yr) are used as risk factors. As described so far, this falls under the "Pricing Model" approach discussed earlier, in that an instrument is modeled as its constituent cash flows and each cash flow is priced by discounting off a yield curve.

For parametric VaR calculations in RiskMetrics, however, only pricing models for a limited set of cash flow vertices (those corresponding to the yield curve points) are actually used. As a result, pricing an arbitrary cash flow off the yield curve and calculating sensitivity to yield curve points is not feasible. Instead, an arbitrary cash flow is mapped or binned to the two nearest neighbor vertices. Early RiskMetrics implementations binned cash flows by requiring that the PV of the original and binned cash flows be equal, and that the volatility of the original cash flow be the linear interpolation of the nearest vertices. A more recent implementation bins by


requiring the PV of the original and binned cash flows be equal, and that the sensitivity to the zero rates at the nearest vertices be equal. (To ensure equality of PV, a third cash flow with maturity zero, that is, cash that has no sensitivity to interest rates, is also used.)

Binning as outlined here is an artifact of the RiskMetrics implementation, but more generally, binning may be a useful shortcut to quickly evaluate a new or unusual instrument. In such a case, the new instrument is represented by a combination of existing instruments. Changes in the PV are most important, since the volatility or VaR (and the P&L distribution) are usually calculated as the change from current market value. Thus, the important criterion for representing a new instrument as a combination of existing instruments is to ensure that the sensitivity to risk factors is the same or as close as possible. Matching PVs, as is done in RiskMetrics, seems to be of decidedly secondary importance.
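A minimal sketch of sensitivity-matched binning along these lines—my own implementation under a continuous-compounding assumption, not the RiskMetrics code itself:

```python
import numpy as np

def bin_cash_flow(c, t, t1, t2, zero):
    """Bin a cash flow c at maturity t to vertices t1 <= t <= t2.

    Matches the sensitivity to the zero rates at the two vertices (with
    linearly interpolated zero rates) and squares the total PV with a
    residual zero-maturity cash amount. Continuous compounding assumed;
    zero(t) returns the zero rate for maturity t.
    """
    w = (t2 - t) / (t2 - t1)        # interpolation weight on vertex t1
    pv = c * np.exp(-zero(t) * t)
    pv1 = pv * t * w / t1           # PV at vertex t1, matching dPV/dr1
    pv2 = pv * t * (1 - w) / t2     # PV at vertex t2, matching dPV/dr2
    pv0 = pv - pv1 - pv2            # residual cash, no rate sensitivity
    return pv0, pv1, pv2

# Example: $100 at 6.5 years, flat 3% zero curve, 5- and 7-year vertices.
print(bin_cash_flow(100.0, 6.5, 5.0, 7.0, lambda t: 0.03))
```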

Proxy There are some cases in which a mapping from asset to risk factor or (more often) the risk factor itself is just not available. An emerging market country making its first bond issue in the global markets would be a prime example, as there would be no history on the distribution of the relevant risk factor (bond yields for that country). In such a case, one must make informed guesses about the relationship between the asset and a risk factor, and what risk factor might provide a reasonable substitute or proxy for the true risk factor. Some proxy risk factor must be used.

Step 2—Risk Factor Distributions

We now turn from the portfolio positions (mapping assets to risk factors) to focus on the market risk factors. The goal is to estimate the range of possible levels and changes in market risk factors. We need to trace out the distribution of changes in market risk factors, basically to trace out the curve such as in Figure 8.1 at the beginning of this chapter.

It is at this point that the differences between estimation approaches (parametric, historical, and Monte Carlo) start to become more apparent.

Parametric (Delta Normal or Variance-Covariance) For parametric estimation, we assume that the distribution of market risk factors is jointly normal. This considerably simplifies estimation of the market risk distributions. For a single risk factor, the only parameters are the mean (often assumed zero for risk measurement purposes) and the volatility or standard deviation. The shape of the distribution is the bell-shaped curve that we have been considering in examples such as in Figure 8.1 or Figure 5.3. The exact spread or dispersion is set by the volatility.


When we consider multiple assets, then we introduce the covariances or the correlations across assets, but again, this is a relatively easy parameter or set of parameters to estimate. At least easy relative to the task of estimating the joint distribution if we do not assume normality.

Estimation of means, variances (volatilities), and covariances is one of the most-studied problems in statistics, so there is a huge literature on the problem. In the simplest case, the volatility is estimated by the sample standard deviation of historical changes:

\[ \text{Volatility} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \left( \Delta rf_i - \overline{\Delta rf} \right)^2}, \quad \text{Mean} = \overline{\Delta rf} = \frac{1}{n} \sum_{i=1}^{n} \Delta rf_i \]

\( \Delta rf_i \) = change in risk factor for period i (say, from yesterday to today or last week to this week)

The covariances (between risk factors rf_1 and rf_2) are estimated by:

\[ \text{Covariance} = \frac{1}{n} \sum_{i=1}^{n} \left( \Delta rf_{1,i} - \overline{\Delta rf_1} \right) \left( \Delta rf_{2,i} - \overline{\Delta rf_2} \right) \]

In other words, as long as we are willing to assume that changes in risk factors are indeed normal, we can estimate the full distribution just by calculating a relatively small set of numbers. The downside, of course, is that risk factor changes may not, indeed usually are not, normal—the tails are fat relative to the normal. For studying the central part of the distribution, however, normality may be a reasonable approximation.
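In code these estimates are one-liners. A minimal sketch, with simulated changes standing in for the risk factor history (note that numpy uses the 1/(n−1) convention for the covariance, where the formula above uses 1/n—immaterial for large n):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for two years of daily changes in two risk factors (500 days).
d_rf = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, 0.6], [0.6, 2.0]], size=500)

vols = d_rf.std(axis=0, ddof=1)      # sample volatilities, 1/(n-1) version
covmat = np.cov(d_rf, rowvar=False)  # variance-covariance matrix
print("volatilities:", vols)
print("covariance matrix:\n", covmat)
```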

Historical The parametric approach just discussed estimates the parameters of the risk factor distributions, not individual risk factor measurements. (Given the assumed functional form for the distribution, we can generate observations if need be.) The historical approach, in contrast, uses historical observations as finite-sample realizations of the risk factor distributions. The parametric approach assumes a particular distributional functional form and so comes up with the complete distribution (conditional on the assumed functional form, of course). The historical approach uses history as a set of observations from the distribution, generated by nature.

In summary, the idea behind the historical approach is very simple: use the historical observations of risk factor changes as the distribution. Go back in history and simply use the actual observations. We see in the next section, where we discuss generating the P&L distribution, that the


historical approach is just a form of Monte Carlo estimation, with the distribution of market risk factors being the empirical distribution rather than a fitted or chosen distribution.

Conceptually, the historical approach is very simple. There are, however, some subtle points regarding revaluation using historical risk factors that we discuss shortly and again in Chapter 9. Furthermore, one point that we need to emphasize is that historical estimation as often practiced will not produce highly reliable estimates. A typical application might use one or two years of historical data (on the order of 300 to 600 observations). For any Monte Carlo application, this would be considered an unreasonably small number of draws from the joint distribution. Yet we do exactly that with historical estimation. In using the results of a historical estimation with relatively few historical observations, we need to keep in mind that the estimates may have a high degree of uncertainty.

Monte Carlo Like the historical approach, the Monte Carlo approach uses a finite set of observations as a finite-sample realization of the risk factor distribution. The difference between the historical and the Monte Carlo approach is in how the finite set of observations is generated. For the historical approach, we take a set of historical observations. For the Monte Carlo approach, synthetic or simulated observations are generated by some numerical algorithm (pseudo-random number generator). The random numbers are generated according to some given distribution.

The term Monte Carlo refers to how the finite set of risk factor observations is generated, not to how the underlying distribution is chosen. The underlying market risk factor distribution may be (and in fact often is) assumed normal. In that case, the estimation of the parameters of the distribution is exactly the same as for the parametric approach. Whatever method is used for choosing the underlying joint distribution, the essence of the Monte Carlo approach is that a finite set of observations for the market risk factors is generated by a computer algorithm.

Comment Regarding Daily Correlation—The Closing-Time Problem Any realistic portfolio contains multiple assets. The covariance, correlation, or co-movement between assets is a major, sometimes the major, determinant of the overall portfolio risk. We want to be careful estimating the correlations.

We will generally use closing prices, and changes from one close to another, to measure risk factor changes. For daily changes, we calculate the change from yesterday's close to today's close. For weekly changes, we calculate the change from last week's close to this week's close.

When considering assets traded in different time zones, an important issue arises—the closing times will be different. Closing prices for equities in


London will be about 11 A.M. and for equities in New York will be about 4 P.M., both measured by New York time. Any news or events that occur between 11 A.M. and 4 P.M. will be reflected in today's New York closing price but not London's. Such events will show up in tomorrow's London prices.

The noncontemporaneous closing times will induce spurious correlations between price changes measured in the two venues. The same-day correlation between London today and New York today will be reduced, and the lagged correlation between London tomorrow and New York today will be raised. Both effects will be spurious.

This effect can be large. Say that the true same-day correlation between two equities, one traded in London and the other in New York, is 0.95, and that the true lagged correlation is zero. In other words, the two equities move together with a high degree of certainty but there is no serial correlation. Were we to measure this correlation using daily changes in closing prices, we would observe a same-day correlation of 0.75 and lagged correlation of 0.20. If the equities were traded in Japan and New York, the same-day correlation would fall to 0.44 and the lagged correlation would be 0.51. These are substantially misleading results.21

The effect will be the same whether we use the parametric or historical approach. For the parametric approach, we see the effect directly in the calculated risk factor correlations. For the historical approach, it is hidden in the implied correlations across risk factors, but it is real nonetheless.

One simple expedient is to avoid using changes in daily closing prices and use weekly changes instead, scaling down to daily later, if necessary. For weekly data, the closing-time effect is substantially reduced relative to daily data—for London and New York with a true correlation of 0.95, the measured same-day correlation would be 0.92 (versus daily 0.75), and the lagged correlation would be 0.03 (versus daily 0.20).
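These numbers can be checked with a small simulation. The sketch below uses a stand-in model of my own: each asset's daily return is the sum of 24 hourly shocks with cross-correlation 0.95 and no serial correlation, and London's "day" is the same 24-hour window shifted back five hours:

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, lag_hours = 50_000, 5   # London closes ~5 hours before New York
hours = 24 * n_days

# Hourly shocks for two assets, cross-correlated 0.95, serially independent.
shocks = rng.multivariate_normal([0, 0], [[1, 0.95], [0.95, 1]], size=hours)

# New York days are hours [0,24), [24,48), ...; London days are the same
# windows shifted back by lag_hours (np.roll shifts the series forward).
ny = shocks[:, 0].reshape(n_days, 24).sum(axis=1)
ldn = np.roll(shocks[:, 1], lag_hours).reshape(n_days, 24).sum(axis=1)

same_day = np.corrcoef(ldn, ny)[0, 1]
lagged = np.corrcoef(ldn[1:], ny[:-1])[0, 1]  # London tomorrow vs. NY today
print(f"measured same-day correlation: {same_day:.2f}")  # ~0.75
print(f"measured lagged correlation:   {lagged:.2f}")    # ~0.20
```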

Step 3—Generate P&L Distribution

This is where the differences between the three approaches come to the fore. Generating the P&L distribution with the parametric approach is fast and simple, while the historical and Monte Carlo approaches often require substantial computer resources. The payoff for the extra effort is the prospect that the results will better reflect reality.

Like any modeling effort, however, there are two issues we must keep in mind. First, we need to evaluate the trade-off between the potential for more realistic results versus the time, effort, and complexity required to produce those results. Sometimes a simple but quick result is better than a sophisticated result that we don't have. Second, we need to critically evaluate the inputs to our models. Just because a particular approach has the potential to better reflect reality does not mean it will. It depends on the quality of the inputs—junk input to a perfect model still produces junk output.

21 This closing-time problem is discussed in Kahya (1998) and Coleman (2007), together with some estimation strategies.



Parametric (Delta Normal) The parametric approach is also called the delta or linear method, delta-VaR, delta-normal, or variance-covariance method. We use linear sensitivities (first derivatives).22

As an example, a bond position might be represented by sensitivities to points on the yield curve; $20M of the 10-year U.S. Treasury bond we have been working with has a sensitivity of $18,300/bp to a fall in the 10-year yield.

For the distribution of risk factor changes, we assume a specific parametric functional form that depends on a small number of parameters. The normal distribution (multivariate joint normal distribution) is almost always chosen as the distribution because of its mathematical tractability.

Individual risk factor P&Ls will be a linear function of changes in risk factors:

\[ \text{P\&L(risk factor 1)} = \text{sensitivity}_1 \times \text{change in risk factor 1} \]

When the risk factors are normally distributed, the risk factor P&Ls will also be normal because a linear function of a normal variate is still normal.

The overall portfolio P&L is the sum of the individual risk factor P&Ls:

\[ \text{P\&L(risk factor 1)} = \text{sensitivity}_1 \times \text{change in risk factor 1} \]
\[ \text{P\&L(risk factor 2)} = \text{sensitivity}_2 \times \text{change in risk factor 2} \]
\[ \cdots \]
\[ \text{Total P\&L} = \text{P\&L(risk factor 1)} + \text{P\&L(risk factor 2)} + \cdots + \text{P\&L(risk factor } k) \]

The individual risk factor P&Ls will be jointly normal. The portfolio P&L, the sum of the individual factor P&Ls, will also be normal because the sum of normal variables is itself normal.

22 Jorion (2007, 247 ff) calls this a local-valuation method, meaning that sensitivities are measured using local derivatives.


The parameters of the joint normal distribution are the mean and the variance-covariance matrix. The linearity of positions and joint normality of the market risk factors combine to produce normality of the total P&L distribution. Normality of the total P&L means that the distribution is fully described by the mean (often set to zero) and the variance-covariance matrix.

If we denote the risk factor sensitivities by the column vector:

\[ D = \begin{bmatrix} d_1 \\ \vdots \\ d_k \end{bmatrix} \]

and the variance-covariance matrix by:

\[ \Sigma = \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1k} \\ \vdots & \ddots & \vdots \\ \sigma_{k1} & \cdots & \sigma_{kk} \end{bmatrix} \]

then the portfolio variance can be easily calculated as:

\[ \text{Portfolio variance} = \sigma_p^2 = \sum_{ij} d_i d_j \sigma_{ij} = D' \Sigma D \]

Parametric VaR is easy computationally. The approach has advantages and disadvantages (see also Marrison 2002, 104 ff):

Advantages:

& Fast relative to Monte Carlo or historical simulation.
& Allows calculation of marginal VaR or contribution to risk as discussed in Chapter 10.

Disadvantages:

& Does not handle risk from nonlinear positions (such as options) well.
& The parametric assumption is often not appropriate (normality in particular), for example, in measuring tail events (although see the discussion further on under Alternative Distributional Assumptions).

Historical and Monte Carlo The parametric approach we just discussed estimated the parameters of the portfolio P&L distribution, not individual measurements or P&L observations. There was no necessity to estimate


individual points of the distribution because, when we assume the distribution is normal, estimating the volatility (standard deviation, variance) tells us everything. For a normal distribution with volatility σ_p (and mean either zero or estimated), we can calculate any probability statement we want: VaR for any probability level, expected shortfall, whatever we may require.

The historical and Monte Carlo approaches are quite different from the parametric approach: we start with a risk factor distribution represented by a finite set of observations. (To reiterate, the difference between historical and Monte Carlo is in how the finite set of risk factor observations is generated.) We will use the finite set of risk factor observations to generate a finite set of P&L observations. This finite set of P&L observations is now a finite-sample realization of the P&L distribution.

Our finite set of risk factor observations represents a set of days (or weeks, or months, whatever is our appropriate period). For historical estimation, they really are days. For Monte Carlo, they are synthetic or simulated days. Whatever the case, each day has a complete set of risk factor observations with which we can calculate the P&L for every risk factor.

For each day, we calculate the P&L for each instrument, using the mapping or binning from Step 1. We may use either sensitivities (linear approximation) or full revaluation (using the full valuation model). If we use sensitivities, then the P&L for a particular day for risk factor 1 would be:

\[ \text{P\&L(risk factor 1)} = \text{sensitivity}_1 \times \text{change in risk factor 1} \]

If we use full revaluation, then

\[ \text{P\&L(risk factor 1)} = \mathrm{PV}(\text{base value risk factor 1} + \text{change in risk factor 1}) - \mathrm{PV}(\text{base value risk factor 1}) \]

There is a slightly subtle point regarding the changes in risk factors. The "base value risk factor 1" is the value today. To this, we apply the "change in risk factor 1." For pure Monte Carlo, everything is pretty straightforward. For historical simulation, it can be confusing because we want to apply the change that occurred in the past to today's base value. If we are doing our analysis as of January 27, 2009, one of our historical days might be the change from April 5–6, 2007. We generally do not want to calculate

\[ \text{P\&L(risk factor 1)} = \mathrm{PV}(\text{risk factor 1 on April 6}) - \mathrm{PV}(\text{risk factor 1 on April 5}) \]


The PV as of April 5 might have been very different from the PV as of today. What we are trying to estimate is the effect of changes on today's portfolio. We return to this issue in Chapter 9 when we go over a specific example.

Once we have the P&L for a particular day due to all risk factors and instruments, we can add the separate P&Ls to obtain an estimate of the overall portfolio P&L for that day.

We then repeat the whole process for each of our days and so end up with a finite sample of synthetic observations for the portfolio P&L. This set of observations is our finite-sample estimate of the P&L distribution.
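Putting Steps 1 through 3 together for historical (or Monte Carlo) estimation amounts to looping over days, revaluing, and differencing against today's PV. A minimal sketch with a toy one-factor pricing function (all inputs hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def pv(rf):
    """Toy full-revaluation pricing function: crude 10-year zero-coupon bond."""
    return 1_000_000 / (1 + rf) ** 10

rf_today = 0.04                          # today's base value of the risk factor
d_rf = rng.normal(0, 0.0007, size=500)   # stand-in for 500 historical changes

# Apply each historical change to *today's* base value (the subtle point
# discussed above), revalue, and difference against today's PV.
pnl = pv(rf_today + d_rf) - pv(rf_today)

print(f"volatility:                  {pnl.std(ddof=1):,.0f}")
print(f"1% VaR (empirical quantile): {np.quantile(pnl, 0.01):,.0f}")
```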

Step 4—Calculate Risk Measures

Once we have our P&L distribution, we can use it to calculate things like the volatility or VaR. Again, the approaches differ. With parametric estimation, we have a parametric distribution, while for historical and Monte Carlo, we have a finite sample.

Parametric (Delta Normal) The P&L distribution will be normal and we simply use well-known results from statistics to make probability statements. We estimated the volatility in Step 3, so we know what that is. For example, if the volatility were $130,800 (as it is for the $20 million U.S. Treasury bond introduced in Chapter 1), the value of the VaR for various levels of probability Z would be as shown in Table 8.1 (reproduced here).

We can do more with the P&L distribution, as is discussed in latersections.

Historical and Monte Carlo Here the P&L distribution will be the finite setof P&L observations. We can graph the observations as a histogram.

TABLE 8.1 (REPEATED) Various Combinations of Probability (Z) and P&L (Y) for Normal Distribution

Z      | Y (VaR)  | (Y − μ)/σ | P[Standard Normal Variable ≤ (Y − μ)/σ]
15.9%  | −130,800 | −1.000    | 0.159
5%     | −215,100 | −1.645    | 0.050
2.5%   | −256,300 | −1.960    | 0.025
1%     | −304,200 | −2.326    | 0.010
0.1%   | −404,100 | −3.090    | 0.001


We will more often want to calculate the volatility and the VaR. The volatility will be the usual formula:

Volatility = √[ (1/(n − 1)) Σ_{i=1,n} (P_i − P̄)² ]

Mean = P̄ = (1/n) Σ_{i=1,n} P_i

The VaR will be the empirical quantile. The P&L sample will be n observations {x_i} arranged in ascending order, {x₁ ≤ x₂ ≤ . . . ≤ xₙ}. The Z percent VaR is the Z percent quantile. If n × Z is not an integer, then there is a unique quantile equal to the observed value x_{m+1}, where m = integer smaller than n × Z. If n × Z is an integer, then the quantile is indeterminate between x_{nZ} and x_{nZ+1}. For example, with Z = 0.01 and n = 101, n × Z = 1.01 and the 1 percent quantile is the first observation. With n = 100, n × Z = 1 and the 1 percent quantile is indeterminate between the first and second observations.
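A minimal numpy sketch of these two calculations follows; the P&L sample is a placeholder, and the midpoint rule used for the indeterminate case is just one simple convention, not the book's prescription.

```python
# Minimal sketch: volatility and empirical-quantile VaR from a finite
# P&L sample, following the formulas above.
import numpy as np

def volatility(pnl):
    pbar = pnl.mean()
    return np.sqrt(np.sum((pnl - pbar) ** 2) / (len(pnl) - 1))

def empirical_var(pnl, z):
    x = np.sort(pnl)              # x[0] <= x[1] <= ... <= x[n-1]
    nz = len(x) * z
    m = int(np.floor(nz))
    if nz > m:                    # n*Z not an integer: unique quantile x_{m+1}
        return x[m]               # index m (0-based) is the (m+1)-th smallest
    # n*Z an integer: indeterminate between x_{nZ} and x_{nZ+1};
    # take the midpoint as one simple convention
    return 0.5 * (x[m - 1] + x[m])

pnl = np.random.standard_normal(1000) * 130_800   # placeholder P&L sample
print(volatility(pnl), empirical_var(pnl, 0.01))
```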

Summary

Different approaches have different advantages and disadvantages. Table 8.4 summarizes the pros and cons in tabular format.

TABLE 8.4 Comparison of Parametric, Historical Simulation, and Monte Carlo Approaches

Market risk factors
  Parametric: Parametric distribution (almost always normal); estimate variance-covariance (volatility) using historical data.
  Historical Simulation: Empirical (historical) distribution from chosen past period.
  Monte Carlo: Parametric distribution (often but not necessarily normal); estimate parameters usually from historical data; generate Monte Carlo realizations of market risk factors.

Security sensitivity/revaluation
  Parametric: Linear sensitivity (for normal risk factors, simply matrix multiply variance-covariance by delta).
  Historical Simulation: Usually full revaluation of securities using historical values of risk factors.
  Monte Carlo: Usually full revaluation of securities using simulated values of risk factors.

Speed of computation
  Parametric: Good. Historical Simulation: Fair. Monte Carlo: Poor.

Ability to capture nonlinearity
  Parametric: Poor. Historical Simulation: Good. Monte Carlo: Good.

Ability to capture non-normality
  Parametric: Poor. Historical Simulation: Good. Monte Carlo: Fair.

Pros
  Parametric: Simple, quick, relatively transparent.
  Historical Simulation: Captures non-normality of historical risk factor distribution; captures nonlinearity of security sensitivity.
  Monte Carlo: Captures nonlinearity of security sensitivity well.

Cons
  Parametric: Normality assumption for market risk factors; linearity for security sensitivity; these may not be appropriate for some purposes.
  Historical Simulation: Computationally more difficult than parametric; results may be sensitive to historical period in a less transparent manner than parametric or Monte Carlo; potentially larger sampling variability.
  Monte Carlo: Computationally difficult; usually does not capture non-normality.

Reproduced from Exhibit 5.1 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


8.4 TECHNIQUES AND TOOLS FOR TAIL EVENTS

The most difficult and vexing problem in quantitative risk measurement is trying to quantify tail or extreme events.

Tail events are important because large losses are particularly significant and VaR is often used to quantify the likelihood of large losses. The probability level Z is chosen low, say 1 percent or 0.1 percent, to produce a low probability that losses will be worse than the VaR and a high probability that they will be better. Figure 8.8 shows how a low level for Z implies that the VaR measures the left-hand tail of the distribution. Using VaR in this manner requires focusing on the tail of the distribution.

Jorion (2007) has a discussion of copulas in Section 8.3, alternate parametric distributions in Section 4.2.6, and an introduction to extreme value theory in Section 5.4; Marrison (2002, 157 ff) discusses some alternative approaches for tail events; Beirlant, Schoutens, and Segers (2005) discuss extreme value theory; McNeil (1999) has an introduction to extreme value theory in risk management; and Embrechts, Klüppelberg, and Mikosch (2003) wrote a text devoted to extreme value theory. McNeil, Frey, and Embrechts (2005) provide excellent coverage of many of the quantitative techniques, including alternative distributions, copulas, and extreme value theory.

Measuring tail events is difficult for two fundamental reasons. First, tail or extreme events are by their nature rare and thus difficult to measure. By definition, we do not see many rare events, so it is difficult to make reliable measurements of them and to form judgments about them. Second, because of the scanty evidence, we are often pushed toward making mathematical or statistical assumptions about the tails of distributions (extreme events), but simple and common assumptions are often not appropriate.

[FIGURE 8.8 VaR for Low Probability Level Z: the area Z (maybe 1 percent) in the left-hand tail lies below the P&L level Y, the VaR]
Reproduced from Figure 5.9 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

Most importantly, the assumption of normality is often not very good far out in the tails. Although rare events are rare, they do occur, and measurements across different periods, markets, and securities show that in many cases extreme events occur more often than they would if the P&L behaved according to the normal distribution in the tails. This does not mean the normal distribution is a bad choice when looking at the central part of the distribution, but it does show that the normal distribution can be a poor approximation when examining extreme events.

One example, among many, of the nonnormal nature of extreme events is given in the beginning sections of Beirlant, Schoutens, and Segers (2005). They look at the number of large negative returns for the Dow Jones Industrial Average for the period from 1954 to 2004. There were 10 log returns of −5.82 percent or worse (out of 50 years, or roughly 12,500 days). These events are shown in Table 8.5. Let us assume that the volatility (standard deviation) of log returns is 25 percent annualized, or 1.58 percent daily.23 Using this estimate for volatility, we can calculate how many standard deviations away from the mean each move is, which is also shown in Table 8.5 in the column "No. Sigma (Z-Score)."

With annualized volatility of 25 percent, a move of −5.82 percent is 3.68σ from the mean. According to Table 8.5, there were 10 down moves of −3.68σ or worse over the 50 years.

TABLE 8.5 Ten Largest Down Moves of the Dow, 1954 to 2004

Date               | Close     | Log-Return | No. Sigma (Z-Score)
October 19, 1987   | 1,738.74  | −25.63%    | −16.22
October 26, 1987   | 1,793.93  | −8.38      | −5.30
October 27, 1997   | 7,161.15  | −7.45      | −4.72
September 17, 2001 | 8,920.70  | −7.40      | −4.68
October 13, 1989   | 2,569.26  | −7.16      | −4.53
January 8, 1988    | 1,911.31  | −7.10      | −4.49
September 26, 1955 | 455.56    | −6.77      | −4.28
August 31, 1998    | 7,539.07  | −6.58      | −4.16
May 28, 1962       | 576.93    | −5.88      | −3.72
April 14, 2000     | 10,305.77 | −5.82      | −3.68

Reproduced from Figure 5.1 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute. Source: Based on Beirlant, Schoutens, and Segers (2005, Table 1).

23 Beirlant, Schoutens, and Segers (2005) show that daily volatility estimated over a three-year horizon is usually somewhat less than 25 percent, so 25 percent is a high but not outlandish estimate.


Now we can ask how likely it would be to observe 10 such down moves in 50 years of daily returns if the distribution were normal. Even with the high 25 percent estimate of volatility, the probability of a single observation from a normal distribution being −3.68σ from the mean or worse is tiny—only 0.0117 percent. But we have roughly 12,500 days, so the likelihood of observing one or several such moves in such a long period will be much higher. The probability of observing one or more such moves out of 12,500 days would be roughly 77 percent; two such moves, 43 percent. But the probability of observing 10 or more moves would be minuscule—0.0003 percent.24 We can continue—for example, asking what is the probability of five or more moves worse than −4.53σ (assuming normality), which turns out to be less than 0.000006 percent. In every case, the probability of observing what is shown in Table 8.5 is minuscule.
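These probabilities are easy to reproduce; a minimal sketch assuming scipy is available, following the Bernoulli/binomial logic of footnote 24:

```python
# Minimal sketch: probability of k or more moves of -3.68 sigma or worse
# out of 12,500 days, if daily returns were normally distributed.
from scipy.stats import binom, norm

p = norm.cdf(-3.68)              # single-day probability, ~0.0117 percent
n = 12500
print(1 - binom.cdf(0, n, p))    # one or more moves:  ~77 percent
print(1 - binom.cdf(1, n, p))    # two or more moves:  ~43 percent
print(1 - binom.cdf(9, n, p))    # ten or more moves:  ~0.0003 percent
```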

We should not get carried away, however. The probabilities implied by a normal distribution do indeed fall off far too quickly given the observations in Table 8.5. The probability of observing 10 moves of −3.68σ or worse is 0.0003 percent, which is minuscule. But the loss levels do not fall off so quickly. For a loss only 20 percent lower, −2.944σ, we are almost sure to observe 10 moves, still assuming normality. The probability we will see 10 or more moves of −2.944σ or worse, assuming normality, is 99.6 percent. This result seems extraordinary—that the probability of 10 moves of −2.944σ or worse assuming normality is 0.996, whereas the probability of 10 moves of −3.68σ or worse is 0.000003—but it happens to be true. Assuming normality, the probabilities fall off very quickly as the loss levels get worse, but it does not take large changes in loss levels to cause very large falloffs in probability.

The fact that moderate changes in loss levels lead to large changes in probabilities (assuming normality) ties in nicely with Litterman's rule of thumb discussed in Chapter 5. That rule boils down to assuming actual loss levels are somewhat, but not hugely, larger than predicted by normality. We see again here that moderate changes in loss levels lead to large changes in probabilities (assuming normality).

Figure 8.9 shows the problem with normal tails from a different perspective. The line-connected squares show the expected frequency of events in the tail (excluding the 1987 crash for now) under the assumption of normality and assuming 12,500 days in total. The dots show the empirical frequency. We can see that the normal distribution predicts far too few extreme events.

24 This result goes back to the case of Bernoulli trials discussed in Chapter 2. We have 12,500 Bernoulli trials (days), with the probability of success (move worse than −3.68σ) being 0.0117 percent. The distribution of multiple successes will be binomial, with the probabilities as quoted in the text.



Collecting 50 years of daily data may not be practical for most applications, but Table 8.5 and Figure 8.9 do show us that if we want to consider extreme events, we need to address the issue of fat-tailed distributions. Treating the tails as if they are normal leads to thinking such large moves are much less likely than they actually are. (It also leads to guffaws when a trader says, "We are having a 10 standard deviation move every day," when what the trader should really say is that "events are not behaving according to the normal distribution, with more large moves than predicted by normality.")

The following sections discuss some alternative approaches that are used in practice and covered in the literature. First, I briefly discuss the value of simple rules of thumb and review the rule of thumb introduced in Chapter 5. Second, I consider two simple alternative distributions. They are simple but they model fat tails in an effective manner. Third, I briefly review extreme value theory, the asymptotic theory of maxima—the analogy for tails of standard asymptotic theory and the central limit theorem for the mean of a distribution. Finally, I review copulas, which provide a method to describe dependence among multivariate random variables when the distribution is other than normal.

[FIGURE 8.9 Empirical and Normal Distribution for Tail of Dow Changes, 1954 to 2004: frequency of large down moves (normal vs. empirical) against number of standard deviations from the mean, −5.5 to −3.5]
Note: Frequency is the number of days with large down moves (calculated or observed) out of 12,500 days. These observations are from Table 8.5, derived from Beirlant, Schoutens, and Segers (2005, Table 1), but excluding the October 1987 observation (which is far to the left at −16σ).
Reproduced from Figure 5.10 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.



Before turning to formal mathematical and statistical methods, I want to reiterate the usefulness of simple rules of thumb. In Chapter 5, we discussed Litterman's (1996) maxim to "use as a rule of thumb the assumption that four-standard-deviation events in financial markets happen approximately once per year" (p. 54). This comes down to assuming that once-per-year events are actually 4.0 standard deviations instead of 2.7 standard deviations, as would be predicted by assuming normality.

We can think of Litterman's rule of thumb as saying: "Assume that once-per-year extreme events are 1.5 times larger than normality would tell us." This is simple, and its simplicity is appealing because it is easy to use and easy to explain. As we saw in Chapter 5, we can translate this into probabilistic terms. This rule of thumb turns out to be strong, probabilistically speaking. We could also extend the rule of thumb to "assume once-per-decade extreme events are two times larger than normality would tell us." Again, we will see that this is strong, probabilistically speaking, much stronger than suggested by the Dow Jones observations discussed here.

Alternative Distributional Assumptions

I discuss here two simple alternative assumptions for the distribution of P&L, both distributions having more weight in the tails (fat tails) compared with the normal distribution.25 These are simple models, but useful precisely because they are simple but also capture the fat-tailed character of observed data. In later sections, we discuss more complex techniques.

The first distribution is the Student t-distribution. (See also Jorion 2007, 87–88; Cramér 1974, para 18.2.) The Student t-distribution has one parameter, n or ν, the shape parameter (called degrees of freedom when used in statistical applications). The distribution is symmetric, mean zero, with variance n/(n − 2).26 For n > 2, the variance is finite, and for large n the distribution converges to the standardized normal. Low values of n on the order of 3 to 6 appear to match reasonably well with the tails of financial data (Jorion 2007, 130). A standard t-variate t times a constant c_t will have variance c_t²·n/(n − 2) or standard deviation c_t·√[n/(n − 2)].

25 See McNeil, Frey, and Embrechts (2005), Chapter 3, for a detailed technical discussion of alternative distributions.
26 Confusingly, some statistics texts such as Kmenta (1971, 143) use n for the number of observations, and ν = n − 1 for the degrees of freedom or t-distribution shape parameter, and give the variance as (n − 1)/(n − 3) = ν/(ν − 2).


The probability that such a random variable will be more than j standard deviations from the mean (will be less than −j times the standard deviation) is:

P[c_t·t < −j·c_t·√(n/(n − 2))] = P[t < −j·√(n/(n − 2))]
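A minimal sketch of this probability, assuming scipy (the values j = 4.53 and n = 9 anticipate the Dow Jones discussion later in this section):

```python
# Minimal sketch: probability that a scaled t-variate falls more than
# j standard deviations below the mean, per the formula above.
from math import sqrt
from scipy.stats import t

def t_tail_prob(j, n):
    return t.cdf(-j * sqrt(n / (n - 2)), n)

print(t_tail_prob(4.53, 9))   # ~0.031 percent, as quoted later in the text
```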

The Student t-distribution is well known and relatively easy to work with but has the disadvantage relative to the normal that the sum of t-distributed variables will generally not be t-distributed. The fact that normal variables sum to a normal variable provides huge computational benefits, as we saw in discussing the parametric approach to estimation earlier. When we assume the individual market risk factors are normal (and use linear sensitivities or first derivatives for position exposures), then the overall portfolio P&L will also be normal. The computations required for calculating the portfolio distribution are tremendously simplified when we assume normality. This benefit disappears if we assume the individual market risk factors are Student t-distributed.

The second distribution we consider is a two-point mixture of normals. Unlike the Student t-distribution, this will share many of the computational benefits of the simple normal distribution. The mixture of normals has a distribution function that is the sum of two normal distribution functions: the first with probability (1 − a) and standard deviation σ_mix and the second with probability a and standard deviation b·σ_mix (both with mean μ).27 The variance for such a random variable will be σ_mix²·[(1 − a) + ab²]. Parameters on the order of a = 0.0125 and b = 2.5 give a distribution similar to the normal in the central part of the distribution but fatter in the tails. The probability that such a random variable X will be more than j standard deviations from the mean (that is, less than −j times the standard deviation σ_mix·√[(1 − a) + ab²]) will be

P[X − μ ≤ −j·σ_mix·√((1 − a) + ab²)]
  = (1 − a)·P[Standard Normal ≤ −j·√((1 − a) + ab²)]
  + a·P[Standard Normal ≤ −j·√((1 − a) + ab²)/b]
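This tail probability is easy to evaluate numerically; a minimal sketch assuming scipy, with the a and b values from the text:

```python
# Minimal sketch: tail probability for the two-point mixture of normals,
# implementing P[X - mu <= -j * (total standard deviation)] from above.
from math import sqrt
from scipy.stats import norm

def mixture_tail_prob(j, a=0.0125, b=2.5):
    s_u = sqrt((1 - a) + a * b ** 2)   # total sd in units of sigma_mix
    return (1 - a) * norm.cdf(-j * s_u) + a * norm.cdf(-j * s_u / b)

print(mixture_tail_prob(4.53))   # ~0.039 percent
print(norm.cdf(-4.53))           # ~0.0003 percent if normal
```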

27 This is a simple example of the more general class of normal mixture distributions. See section 3.2 of McNeil, Frey, and Embrechts (2005). Interestingly, the multivariate t distribution is a normal variance mixture, and many other distributions of interest in finance can be generated as normal mean-variance mixtures.


The mixture of normals is appealing because it combines simplicity with a straightforward motivation for fat tails. Say the P&L distribution for each day is normal but periodically there is a day that is still normal but with higher volatility. This is an attractive assumption since it is what trading in the markets actually feels like—periods of relative quiescence interspersed with days of mayhem. The conditional distribution, conditional on knowing whether the day is low or high volatility, will be normal. In contrast, the unconditional distribution, not knowing whether a day is low or high volatility, will be nonnormal and will exhibit fat tails. A two-point mixture of normals provides a rough approximation to this.28

The biggest advantage in using the two-point mixture of normals is that the computational simplicity associated with the normal distribution can be applied to the two normal distributions separately and the results combined using the two-point mixture. For example, the mixture of normals can be used with parametric VaR estimation to rectify the problem that the normal distribution (usually assumed with parametric VaR) does not handle tail events well.

With either of these two distributions, calculation of VaR is slightly more difficult, but the fundamental ideas remain the same. Both distributions (for appropriate choice of parameters) have tails that are fatter than the normal distribution. If we had the observed volatility and knew that the P&L was normal, or t-distributed, or a mixture of normals, we could calculate the VaR for various probability levels using equation (8.1), reproduced here:

Z% VaR = Y s.t. P[P&L ≤ Y] = Z   (8.1)

Figure 8.10 shows the three distributions for our hypothetical bond trade.29 Visually, the three densities do not appear very different. The overall shape is the same for all three, and it is only when examining the densities far out in the tails that the differences become apparent.

28 A rough approximation that is robust and simple can be quite valuable: remember that a 90 percent correct answer in the hand is more useful than a 99 percent correct answer that is not available.
29 The t-distribution has degrees-of-freedom n = 9 and the mixture of normals has a = 1.25%, b = 2.5. We discuss these parameter choices shortly when we turn to the Dow Jones data for 1954 to 2004.


The formulae for the VaR and expected shortfall (ES) for a normal, t-distribution, and two-point mixture of normals are:30

Normal:
  VaR_Z = μ + σ·Φ⁻¹(Z)
  ES_Z = μ − σ·φ[Φ⁻¹(Z)]/Z

t-distribution:
  VaR_Z = μ + σ·√[(n − 2)/n]·t_n⁻¹(Z)
  ES_Z = μ − σ·√[(n − 2)/n]·(g_n[t_n⁻¹(Z)]/Z)·([n + (t_n⁻¹(Z))²]/(n − 1))

Mixture:
  VaR_Z = μ + σ·y₀, where y₀ solves (1 − a)·Φ[y₀·√((1 − a) + ab²)] + a·Φ[y₀·√((1 − a) + ab²)/b] = Z
  ES_Z = μ − σ·√((1 − a) + ab²)·{(1 − a)·φ[y₀·√((1 − a) + ab²)] + a·b·φ[y₀·√((1 − a) + ab²)/b]}/Z

(Here Φ and φ are the standard normal distribution and density functions, and t_n⁻¹ and g_n are the inverse distribution function and density of the standard t-distribution with n degrees of freedom; the signs are such that VaR and ES come out as negative P&L values, consistent with Table 8.6.)
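As a numerical check, here is a minimal Python sketch of these formulas (assuming scipy; not the book's code). Run as is, it reproduces the 5 percent column of Table 8.6 below.

```python
# Minimal sketch of the VaR/ES formulas above. Signs follow the text's
# convention: VaR and ES come out as negative P&L values.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, t

def var_es_normal(Z, mu=0.0, sigma=1.0):
    q = norm.ppf(Z)
    return mu + sigma * q, mu - sigma * norm.pdf(q) / Z

def var_es_t(Z, n, mu=0.0, sigma=1.0):
    scale = np.sqrt((n - 2) / n)      # sigma is the P&L volatility
    q = t.ppf(Z, n)
    es = mu - sigma * scale * (t.pdf(q, n) / Z) * ((n + q ** 2) / (n - 1))
    return mu + sigma * scale * q, es

def var_es_mixture(Z, a, b, mu=0.0, sigma=1.0):
    s_u = np.sqrt((1 - a) + a * b ** 2)
    f = lambda y: (1 - a) * norm.cdf(y * s_u) + a * norm.cdf(y * s_u / b) - Z
    y0 = brentq(f, -20.0, 0.0)        # solve for y0 as in the text
    es = mu - sigma * s_u * ((1 - a) * norm.pdf(y0 * s_u)
                             + a * b * norm.pdf(y0 * s_u / b)) / Z
    return mu + sigma * y0, es

print(var_es_normal(0.05))                 # ~(-1.645, -2.063)
print(var_es_t(0.05, n=6))                 # ~(-1.587, -2.213)
print(var_es_mixture(0.05, a=0.01, b=5))   # ~(-1.506, -2.576)
```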

Table 8.6 shows the VaR and expected shortfall for a standard normal, a t-variate (6 degrees of freedom, scaled so that volatility is 1.0), and a two-point mixture of normals (a = 1%, b = 5, per the table note).

Note that as the probability level Z gets smaller (for example, going from 5%/95% VaR to 1%/99% VaR), the expected shortfall for the Student t and mixture of normals distributions gets larger relative to the normal, a result of the fatter tails.

We can use these alternative distributions to examine the returns for the Dow Jones over the period from 1954 to 2004 discussed earlier. Figure 8.11 expands on Figure 8.9, showing the expected frequency of events in the tail (excluding the 1987 crash, again) for a normal, mixture of normals, and Student t-distribution (in all cases assuming a standard deviation of 25 percent) versus the empirical frequency. As we saw in Figure 8.9, the normal distribution gives far too few extreme events. The mixture of normals and the Student t-distribution, however, give a much more realistic representation of the actual data.

30 See McNeil, Frey, and Embrechts (2005, 39–40, 45–46) for the normal and t-distribution. My formulae differ slightly from McNeil et al.: first, the expression √[(n − 2)/n] appears because a standard t-variate with n degrees of freedom has variance n/(n − 2), in contrast to a standard normal, which has variance 1. In my expressions, the term σ is the volatility of the P&L distribution for both the normal and the t-distribution; the σ in McNeil et al. eq. (2.20) is not the standard deviation of the P&L but the scale parameter. Second, my Z is the probability losses will be worse than VaR (e.g., 1 percent or 5 percent) while McNeil et al.'s α is the probability losses will be better (e.g., 99 percent or 95 percent).


By using either the mixture of normals or the Student t-distribution, we could have some confidence that we would not be horribly off in making statements about extreme events related to changes in the Dow Jones Industrial Average.

The Student t-distribution shown in Figure 8.11 is for degrees of freedom n = 9.31 The mixture of normals is for a = 1.25 percent (probability of high-volatility regime) and b = 2.5 (ratio of high volatility to low volatility).

[FIGURE 8.10 Density Functions for Normal, t-distribution, and Mixture of Normals]
Note: The t-distribution has degrees-of-freedom n = 9 and the mixture of normals has a = 1.25%, b = 2.5.

TABLE 8.6 VaR and Expected Shortfall for Normal, Student t, and Mixture of Normals

              |   5%    |   1%    |  0.39%  |  0.1%
Normal VaR    | −1.645  | −2.326  | −2.661  | −3.090
Normal ES     | −2.063  | −2.665  | −2.970  | −3.367
Student t VaR | −1.587  | −2.566  | −3.201  | −4.252
Student t ES  | −2.213  | −3.293  | −4.017  | −5.238
Mixture VaR   | −1.506  | −2.209  | −2.727  | −5.754
Mixture ES    | −2.576  | −4.104  | −5.856  | −9.771

Note: The t-distribution has degrees-of-freedom n = 6 and the mixture of normals has a = 1%, b = 5.

31 In this particular case, we use n (degrees of freedom) = 9, but more generally, lower values of n on the order of 3 to 6 appear to match reasonably well with the tails of financial data (see Jorion 2007, 130).


We can use these distributions to examine some probability statements about the Dow Jones observations.

• What is the probability of an observation 4.53σ from the mean? (Remember from Table 8.5 that this occurred five times in 50 years, so it is roughly a once-per-10-years event, or probability 0.039 percent.)

[FIGURE 8.11 Empirical and Selected Distributions for Tail of Changes in Dow Jones Index, 1954 to 2004: frequency of large down moves (normal, t-distribution with 9 degrees of freedom, mixture of normals 1.25%/2.5, and empirical) against number of standard deviations from the mean]
Notes: Frequency is the number of days with large down moves (calculated or observed) out of 12,500 days. The empirical observations are from Table 8.5, derived from Beirlant, Schoutens, and Segers (2005, Table 1), but excluding the October 1987 observation (which is far to the left at −16σ). The "Normal" is a standard normal distribution (standard deviation 1.00). The "Mixture of Normals" is a two-point mixture with a = 1.25%, b = 2.5. (This means 1.25 percent probability of a normal with standard deviation 2.5 × 0.9687 and 98.75 percent probability of a normal with standard deviation 0.9687, giving a distribution with overall standard deviation 1.00. The term 0.9687 is calculated as 1/√(0.9875 + 0.0125 × 2.5²).) The t-distribution is a standardized t-distribution with 9 degrees of freedom multiplied by 1/√(9/(9 − 2)), to give a distribution with standard deviation 1.00. Reproduced from Figure 5.1 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


  – Under normality, the probability is 0.0003 percent.
  – Under the t-distribution, it is 0.031 percent.
  – Under the mixture of normals, it is 0.039 percent.

In other words, normality predicts far too low a probability, the same thing we see in Figures 8.9 and 8.11.

• What is the size of a move that, under a normal distribution, would produce the same probability as the t-distribution (0.031 percent) or the mixture of normals (0.039 percent)?
  – A normal move of 3.42σ has probability 0.031 percent, the same as the t-distribution move of 4.53σ. In other words, the t-distribution predicts a move 1.32 times larger than the normal distribution.
  – A normal move of 3.36σ has probability 0.039 percent, the same as the mixture of normals move of 4.53σ, so the mixture predicts a move 1.35 times larger than the normal distribution.

Compare this with the rules of thumb mentioned earlier. Litterman's rule is that once-per-year events are 1.5 times larger than predicted by normality. My rule (extending Litterman's) is that once-per-decade events are 2 times larger than predicted by normality. The Dow Jones data for 1954 to 2004 seem to indicate that these rules of thumb are quite strong. The Dow Jones observations show that once-per-decade moves are roughly 1.4 times larger than predicted by normality.

• What is the probability of having five moves of 4.53σ or larger under the normal distribution, the t-distribution, and the mixture of normals? (A sketch verifying these numbers follows this list.)
  – For normality, the probability of five such moves is less than 0.0001 percent.
  – For the t-distribution, the probability of five such moves is 35 percent.
  – For the mixture of normals, the probability of five such moves is 54 percent.

We still need to be careful, however. We have chosen the parameters (a = 1.25 percent and b = 2.5 for the mixture of normals, degrees of freedom = 9 for the t-distribution) based on a very small number of observations in the tails (essentially 10 out of 12,500). Furthermore, we have ignored the 1987 crash. The probability of observing one such large move (the 1987 crash was −16.22σ) would be tiny, even assuming the t-distribution or mixture of normals.32

32 We could, of course, extend the mixture of normals to a three-point mixture. A mixture with a₁ = 1.25%, b₁ = 2.5, a₂ = 0.02%, b₂ = 30 would fit these Dow data reasonably well. Although maybe too ad hoc in the present context, with regard to liquidity risk and behavior during a liquidity crisis, considered in Chapter 12, such an approach may be useful.


Nonetheless, simple rules of thumb and alternative distributions are incredibly valuable. They may not solve all our problems—it is a fool's dream to think any mathematical model will perfectly represent our complex real world—but they help us think more carefully and fruitfully about the issues.

Extreme Value Theory33

Extreme value theory (EVT) is the name given to the study of the asymptotics of tail events. The law of large numbers and the central limit theorem are familiar to everyone. They provide tools to study the sum and average of a sequence of random variables as the number of observations gets large. Essentially, they say that the average will ultimately behave the same no matter what the distribution of the original variables (within some limits), thus providing a simple characterization of the average as the number of observations grows. The beauty and power of these is that they hold true no matter what the distribution of the underlying random numbers, as long as the random variables do not have too much chance of being too large—for example, if they have finite mean and variance.34

The formal study of the tails of distributions goes under the name extreme value theory, or EVT. The central limit theorem studies the average of a sequence. EVT, in contrast, studies the maximum (or related characteristics) of a sequence—that is, the tails of a distribution. As such, EVT provides tools and techniques particularly suited to analyzing VaR and tail events. Broadly speaking, there are two approaches to studying the tails. The first is to consider the maximum of a sequence of random variables. This leads to the generalized extreme value (GEV) distribution. The second is to consider threshold exceedances, that is, all the observations that exceed some specified high level. This leads to the generalized Pareto distribution (GPD). The threshold exceedance approach follows from the maximum GEV approach, but the GPD is generally considered more useful in practice because it makes more efficient use of the limited data on extreme events.

33 Much of this discussion is based on McNeil, Frey, and Embrechts (2005, ch. 7), but also see Embrechts, Klüppelberg, and Mikosch (2003). Following McNeil, Frey, and Embrechts, this section discusses the upper tail of the distribution.
34 The law of large numbers says that the average will converge to a constant, the mean. The central limit theorem says the average, scaled up by the root of the number of observations, converges to a random variable with normal distribution. For {Xₙ}, an independent sequence of random variables with the same distribution and E[Xₙ] = μ and Var[Xₙ] = σ² finite, then:

strong law of large numbers: n⁻¹ΣXₖ → μ with probability 1
central limit theorem: Sₙ = X₁ + X₂ + . . . + Xₙ, then (Sₙ − nμ)/(σ√n) = √n·(n⁻¹ΣXₖ − μ)/σ ⇒ Normal(0, 1)

See Billingsley (1979), Theorem 22.4 and Theorem 27.1. The conditions can be weakened from those stated here.



The beauty and power of EVT is that it provides a simple characterization of the tails, analogous to the central limit theorem for the mean, no matter what the distribution of the original random variables (within limits).

For maxima of random variables Mₙ = max(X₁, . . . , Xₙ), suitably normalized analogously to the central limit theorem normalization by the mean and standard deviation, the only possible limiting distributions (apart from the degenerate case of a constant) are in the generalized extreme value (GEV) family (McNeil, Frey, and Embrechts 2005, 265). The standardized GEV distribution is:

H_ξ(x) = exp(−(1 + ξx)^(−1/ξ))   ξ ≠ 0
       = exp(−e^(−x))            ξ = 0

where 1 + ξx > 0

Introducing location and scale parameters gives the general GEV: H_{ξ,μ,σ}(x) = H_ξ[(x − μ)/σ]. The parameter ξ is known as the shape parameter and determines three types, also known by other names:

• ξ > 0 – Fréchet distribution (type II maximum extreme value distribution).
  – This is the most studied type of distribution in EVT because it has fat tails and is thus of particular interest in finance.
  – Examples of distributions that lead to the Fréchet limit are the Fréchet itself, inverse gamma, t-distribution, F (McNeil, Frey, and Embrechts 2005, 269).
  – The k-th moment will be finite only for k ≤ 1/ξ (technically, for a nonnegative random variable with distribution of the Fréchet class, E(X^k) = ∞ for k > 1/ξ; see McNeil, Frey, and Embrechts 2005, 268).
  – To make life confusing, the distribution can be parameterized differently. Wolfram's Mathematica parameterizes with "shape parameter" α = 1/ξ, scale β = σ, and ordinate y = 1 + ξx:

    CDF = exp(−(y/β)^(−α))

• ξ = 0 – Gumbel distribution (type I maximum extreme value distribution).
  – The tails decay faster than the Fréchet type. Essentially, distributions in this class have tails that decay exponentially.


  – Distributions of this type have finite moments (technically, for a nonnegative random variable with distribution of the Gumbel class, E(X^k) < ∞ for all k > 0; McNeil, Frey, and Embrechts 2005, 269).
  – Nonetheless, there is a great deal of variety in the tails of the distribution. McNeil, Frey, and Embrechts (2005, 269) point out that both the normal, with thin tails, and the lognormal, with much fatter tails, belong to the Gumbel class and that empirically it would be difficult to distinguish the lognormal tail behavior from the Fréchet type.35
  – Examples of Gumbel-class distributions are the normal, lognormal, hyperbolic, and generalized hyperbolic distributions, normal mixture models (but excluding the t-distribution, which is a boundary case), gamma, chi-squared, standard Weibull (confusingly, different from the Weibull case of the GEV).

[FIGURE 8.12 Generalized Extreme Value Distribution]
Note: The line is a Fréchet (type II maximum extreme value distribution) with ξ = +0.5. The short dash or dot is a Gumbel distribution (type I maximum extreme value distribution) with ξ = 0.0. The long dash is a Weibull distribution with ξ = −0.5.

35 "In financial modeling, it is often erroneously assumed that the only interesting models for financial returns are the power-tailed distributions of the Fréchet class [GEV with ξ > 0, where the tail decays slowly, like a power function]. The Gumbel class [GEV with ξ = 0, where the tails decay exponentially] is also interesting because it contains many distributions with much heavier tails than the normal, even if these are not regularly varying power tails" (McNeil, Frey, and Embrechts 2005, 269).


• ξ < 0 – Weibull distribution.
  – Not important here because of the short tail and finite endpoint.

Estimates of ξ on the order of 0.2 to 0.4 are consistent with stock market data.
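A minimal sketch of the standardized GEV function above, assuming scipy; note that scipy's `genextreme` uses the opposite sign convention for the shape parameter (c = −ξ), which is a common source of confusion.

```python
# Minimal sketch: the standardized GEV distribution H_xi, written out
# directly and cross-checked against scipy (whose shape c equals -xi).
import numpy as np
from scipy.stats import genextreme

def gev_cdf(x, xi):
    if xi == 0.0:                        # Gumbel case
        return np.exp(-np.exp(-x))
    z = 1 + xi * x
    if z <= 0:                           # outside the support 1 + xi*x > 0
        return 0.0 if xi > 0 else 1.0
    return np.exp(-z ** (-1.0 / xi))

xi = 0.3   # shape in the 0.2-0.4 range quoted above for stock market data
print(gev_cdf(1.5, xi), genextreme.cdf(1.5, -xi))   # the two should agree
```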

The GEV distribution provides a simple asymptotic characterization of the maxima for, in essence, any P&L distribution that one might use. This is very powerful, but for actual use, including only the maxima is wasteful of data. Something called threshold exceedances are a more practical solution.

Threshold exceedances are values that exceed a certain specified high level. They are extreme in the sense that they are the large observations in a set of observations. The key functions for discussing exceedances are the excess distribution and the mean excess functions. The excess distribution function gives the probability conditional on the random variable exceeding some specified level u.

Let X be the variable representing the random P&L. We focus on the exceedance X − u, the amount by which X exceeds the level u, and on the size of the exceedance, y (which will be non-negative). The probability that the exceedance X − u is less than an amount y (conditional on X exceeding u) is the excess distribution:

Excess Distribution: F_u(y) = P[X − u ≤ y | X > u] = [F(y + u) − F(u)] / [1 − F(u)]

The mean excess function is the mean conditional on exceeding the threshold u (for random variables X with finite mean):

Mean Excess: e(u) = E[X − u | X > u]

We saw that in the case of maxima, effectively any distribution leads, asymptotically and for suitably scaled maxima, to the GEV distribution, and that the tail behavior can be characterized according to the shape parameter ξ. It turns out that the characterization according to ξ carries over to threshold exceedances: if a distribution function F converges to a GEV distribution with parameter ξ, then as the threshold u rises, the excess distribution F_u converges to a generalized Pareto distribution (GPD) with the same parameter ξ (see McNeil, Frey, and Embrechts 2005, 277). The GPD is given by:

G_{ξ,β}(y) = 1 − (1 + ξy/β)^(−1/ξ)   ξ ≠ 0
           = 1 − exp(−y/β)           ξ = 0

where β > 0, and y ≥ 0 for ξ ≥ 0 and 0 ≤ y ≤ −β/ξ for ξ < 0.


The parameter ξ is again the shape parameter, while β is a scale parameter.

In practical applications, the GPD is useful for modeling the excess distribution function and the mean excess function because both functions are simple for the GPD. The excess distribution for a GPD is also GPD:

Excess Distribution (over u) for G_{ξ,β}: F_u(y) = G_{ξ,β(u)}(y), where β(u) = β + ξu

The mean excess function for the GPD is:

e(u) = β(u) / (1 − ξ) = (β + ξu) / (1 − ξ)

with 0 ≤ u < ∞ for 0 ≤ ξ < 1 (remember that the mean is not finite for ξ ≥ 1) and 0 ≤ u ≤ −β/ξ for ξ < 0.

The GPD results can be used by assuming that if we choose a high but finite threshold u, the observed excess distribution function will actually be GPD. We then fit the parameters ξ and β from the observed exceedances, and use the resulting fitted GPD to make statements about VaR or expected shortfall, as sketched below. We know that asymptotically, the excess distribution function for virtually any P&L distribution will converge to GPD. So by using the GPD for a finite sample of observed P&L, we have some justification to think that we are using a functional form that is flexible enough to capture all types of tail behavior but also based on actual tail distributions.
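A minimal sketch of this procedure, assuming scipy; the simulated loss data and the 95th-percentile threshold are placeholders, not recommendations.

```python
# Minimal sketch: fit a GPD to threshold exceedances and back out a tail
# quantile (VaR). scipy's genpareto shape parameter plays the role of xi.
import numpy as np
from scipy.stats import genpareto

losses = np.abs(np.random.standard_t(4, 10_000))  # placeholder loss data
u = np.quantile(losses, 0.95)                     # high but finite threshold
exceedances = losses[losses > u] - u              # y = X - u for X > u

xi, _, beta = genpareto.fit(exceedances, floc=0.0)

# VaR at tail probability z, using P[X > u + y] = P[X > u] * (1 - G(y)):
z = 0.01
p_u = np.mean(losses > u)                         # empirical P[X > u]
var_z = u + genpareto.ppf(1 - z / p_u, xi, loc=0.0, scale=beta)
print(xi, beta, var_z)
```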

Copulas

Copulas provide tools to address dependence of random variables in a multivariate context, specifically multivariate distributions that are not normal. Copulas do not tackle tail events directly but provide tools to move beyond the limits of multivariate normality. Fat tails push one to consider distributions other than the normal, and copulas are a tool for using non-normal multivariate distributions. McNeil, Frey, and Embrechts (2005) devote Chapter 5 to copulas and provide a useful treatment of the subject. This section is no more than an introduction.

There are a variety of distributions that might be used to model the P&L and risk factors, and which also have fatter tails than the normal. The t-distribution is popular, but there are others.36 In the univariate case, when we are dealing with only a single variable, the mathematics and intuition are somewhat more complex, but there are no substantial difficulties.

36 See McNeil, Frey, and Embrechts (2005, ch. 3) for a discussion of alternative distributions, both univariate and multivariate.



When we turn from a single variable to multiple variables, however, we need to address the issue of co-movement and dependence across risk factors. In the multivariate normal context, dependence is wholly incorporated in the correlation or covariance matrix. Much of our intuition is based on multivariate normality, but this intuition and the mathematics behind it do not carry over well to nonnormal distributions.

Measuring and modeling dependence turns out to be challenging when the distribution is not normal. Linear correlation, the tool most of us use to think about dependence, is a deficient measure of dependence in the general multivariate context. McNeil, Frey, and Embrechts (2005, ch. 5, particularly section 5.2) provide a good review of the issues. It turns out that copulas provide a useful approach and set of tools for modeling and describing dependence among multivariate random variables that extends well to nonnormal distributions.

One way to think of copulas is as an alternative method for writing a multivariate distribution. Consider a multivariate random variable of d dimensions, with distribution function F(x₁, . . . , x_d) and marginals {F₁(x₁), . . . , F_d(x_d)}. It turns out that this multivariate distribution can be written in either of two forms:

• Usual multivariate distribution: F(x₁, . . . , x_d)
• In terms of marginals and copula: C(F₁(x₁), . . . , F_d(x_d))

There will always exist a function C(F₁(x₁), . . . , F_d(x_d)), called a copula, which is itself a d-dimensional distribution function on [0, 1]^d with standard uniform marginals.37

What this says is that any multivariate distribution can be thought of as either a multivariate distribution, or as a combination of marginal distributions and a copula. The power of the latter approach is that it isolates the dependence across variables in the copula, with the marginals separate. (This is somewhat analogous to the linear [normal] case, in which the dependence structure can be isolated in the correlation matrix, with the variances separate.)

The power of the copula approach is that it allows the specification of marginals and dependence structure separately. By using copulas we can focus on the dependence structure separately from the marginals. A multivariate distribution can be created by mixing and matching marginals and copulas.

37 See McNeil, Frey, and Embrechts (2005, section 5.1). Copulas are most appropriate for continuous distributions.



The final issue I raise in this introduction is tail dependence. McNeil, Frey, and Embrechts (2005, section 5.2) cover this in some detail, but the basic idea is to measure the dependence of two variables far out in the tails, for extreme observations of both variables. This is particularly important for risk measurement because it is when variables move together simultaneously that the largest losses occur. Furthermore, such simultaneous moves are not uncommon, as extreme values tend to cluster and assets tend to be either volatile or quiescent together. In the simple bond and equity portfolio introduced in Chapter 1, we are particularly concerned with the possibility that both the bond and the equity will have large moves, in the same direction.

We saw earlier that the normal distribution, not having fat tails, does a poor job at representing financial market movements for large moves. What is perhaps more surprising is the behavior of the joint normal distribution for large joint moves. If we go far enough out in the tails, two jointly normal variables will eventually behave independently no matter what the correlation (as long as it is not ±1). This is very troublesome because it means that even if we could use marginal distributions that were fat-tailed, the dependence structure implied by joint normality provides a poor model for assets having large moves simultaneously.

With copulas we can, in fact, mix and match marginal distributions and copulas, building a joint distribution that has a chosen set of marginals (say, t-distribution to model fat tails) matched with a copula that represents the dependence structure. Jorion (2007, 209) and McNeil, Frey, and Embrechts (2005, 195, 213) provide plots of simulations for so-called meta distributions, mixing marginals and copulas. Jorion examines three possible choices for bivariate distributions (a simulation sketch follows the list):

1. Normal marginals and normal copula—produces the usual bivariate normal distribution
2. Student t marginals and normal copula—produces a hybrid distribution
3. Student t marginals and Student t copula—produces the usual bivariate Student t-distribution

In Chapter 9, we apply these three distributions plus two others to estimating our simple portfolio by Monte Carlo. We will be able in that exercise to explicitly see the tail behavior of the normal distribution. We will see that for large joint moves, the bond and the equity start to behave independently.
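A minimal sketch of the second choice—a normal (Gaussian) copula with Student t marginals—assuming numpy and scipy; the correlation and degrees of freedom are illustrative, not taken from the book.

```python
# Minimal sketch: simulate a "meta" distribution -- a normal (Gaussian)
# copula paired with fat-tailed Student t marginals.
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(0)
rho, n_obs, df = 0.5, 10_000, 4              # illustrative parameters

# correlated standard normals via a Cholesky factor of the correlation
corr = np.array([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((n_obs, 2)) @ np.linalg.cholesky(corr).T

u = norm.cdf(z)      # uniforms carrying the normal-copula dependence
x = t.ppf(u, df)     # t marginals: fat tails, normal dependence structure
print(np.corrcoef(x, rowvar=False))
```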


8.5 ESTIMATING RISK FACTOR DISTRIBUTIONS

The focus of this book is on how to think about risk and how to measure risk—in other words, how to think about and measure the P&L distribution. Most of our attention has been directed toward issues that are particular to risk measurement, such as how to map from securities to risk factors, or the definition of VaR. To use the P&L distribution, however, we need to estimate it, and this means estimating market risk factor distributions (as discussed in Section 8.3, Step 2). This takes us into the field of statistics and time-series econometrics. I do not want to cover econometrics in depth, as there are many good textbooks, but I will give a brief overview.38

We will focus on the parametric approach to estimating the P&L distribution, which means we assume that market risk factors are normally distributed.39 The normal distribution is completely determined by the standard deviation or variance-covariance matrix (and the mean). Thus, at its simplest, estimating the risk factor distribution means estimating the standard deviation (volatility) from risk factor changes {Δrf₁, . . . , Δrfₙ}, using the standard formula (given further on).

So far, so simple. There are, however, a host of questions hidden in this simple-sounding approach:

• What are the observations on risk factor changes {Δrf₁, . . . , Δrfₙ}? Dollar change? Yield change? Percent or logarithmic change?
• How many observations do we use?
• A similar but not identical question: What historical period should we use?
• What frequency of data: hourly, daily, weekly, monthly?
• What about changing volatility? It appears from casual observation (buttressed by much research) that market volatility changes over time. How do we deal with changing volatility?

Before answering these questions, however, let us review some of the stylized facts about markets and risk factor distributions.

Financial Time Series Stylized Facts

There are some observations about financial time series (market risk factors) that are well enough established to earn the name stylized facts—observations that most everyone agrees on. (See, particularly, McNeil, Frey, and Embrechts 2005, 117 ff.)

38 McNeil, Frey, and Embrechts (2005, ch. 4) is devoted to financial time series. Box and Jenkins (1970) is the classic text on time series analysis, and Alexander (2001) wrote a more modern text devoted to financial time series.
39 For nonnormality, the problems are more complex, but many of the same issues arise.



Volatility Varies over Time and Extreme Values Cluster   Varying volatility and clustering mean that if we have a large change in a risk factor (say, yields jump up today), we are more likely to see a large change tomorrow. Tomorrow's change, however, may be up or down. A large increase indicates large changes are likely to follow, but does not indicate which direction. Extreme values, both up and down, cluster together.

This clustering of large changes together and small changes together indicates that volatility is not constant, that it varies over time. Most importantly, it indicates that volatility is persistent. If volatility is high one day, it will tend to stay high over subsequent days, changing incrementally rather than all at once.

Another result is that if we look at squared changes (Δrf_t²), we will see serial correlation. A large squared change will tend to be followed by a large squared change (there will be correlation between Δrf_t² and Δrf_{t+1}²). Furthermore, we will be able to forecast these squared changes—when we observe a large Δrf_t², we can forecast that Δrf_{t+1}² will be above average.

This is all for a single risk factor series. For multiple series, we also have volatility clustering, but here we have the added observation that clustering takes place in many risk factors simultaneously. Markets as a whole tend to get volatile or quiescent. When IBM has big moves and gets more volatile, so does General Motors; they are both volatile because the overall market is volatile. A rough but useful way to think is that we are in a low or a high volatility market or regime for all or many assets together. (This is the idea of the simple mixture of normals as an approximation to a multivariate fat-tailed distribution.)

Changes Are Independent and Cannot Be Forecasted   For a single series, when we look at changes rather than squared changes (Δrf_t rather than Δrf_t²), we do not see serial correlation. Furthermore, it is very difficult (practically impossible) to predict tomorrow's change from today's change or past history. Strictly speaking, changes are not independent; they are only uncorrelated and difficult to predict, but the term captures the idea.40

40 Independence requires that squared changes as well as changes are uncorrelated. Technically, independence requires that P[Δrf_t & Δrf_{t+1}] = P[Δrf_t]·P[Δrf_{t+1}]—that is, the probability of any statement about Δrf_t and Δrf_{t+1} jointly equals the product of the separate probabilities, and because of volatility clustering, this is not true. Nonetheless, it is true that for most practical purposes the change Δrf_{t+1} is unrelated to the change Δrf_t.


Describing this in terms of the distribution will help clarify ideas.A large change today in the market risk factor is an indication that the dis-tribution has a large dispersion or high volatility (looks like Panel B ofFigure 8.2 rather than Panel A). Since volatility tends to vary over timeincrementally rather than jump all at once, a high volatility today indicatesthat tomorrow’s distribution will also have a high volatility. But today’slarge change doesn’t tell us what part of the distribution we will draw fromtomorrow; it doesn’t tell us anything about tomorrow’s particular value.We can use today’s large change to predict that tomorrow will be a largechange (so changes are not probabilistically independent) but we cannotuse today’s change to predict the direction of tomorrow’s actual change(and in this sense changes are unrelated over time).

When we turn to multiple series, we do not see serial cross-correlation;that is, we do not see correlation between between Drfat and Drfbtþ1).

41

We will see correlation between series for the same period, in other words,between Drfat and Drfbt. This correlation will obviously be differentacross different series. For example, the correlation between the 5-year and10-year U.S. Treasury bonds will be high but the correlation between the10-year U.S. bond and IBM stock will be much less.

Correlations Change over Time When we look at correlations across serieswe see that correlations change over time. Unlike the stylized facts we havediscussed so far, however, this is more difficult to verify and measure. Ourfirst instinct might be to simply calculate correlations over some nonover-lapping periods, say monthly periods, and compare the resulting correla-tions. This turns out not to tell us much, however, because when we useshort periods, we should expect to see substantial variation in measuredcorrelation, simply because of random variation. This is another exampleof how we need to carefully think about the random nature of the world.

Let us take a short digression and look at a particular example. Thecorrelation between daily changes in yields of the 10-year U.S. Treasuryand percent changes in the IBM stock price is 0.457 for the period January2008 through January 2009. Now let us measure correlations over 12roughly one-month periods (21 trading days). The 12 correlations rangefrom 0.05 to 0.69. This looks like good evidence for changing correlations,with a large range. But we have to ask, what would we expect purely fromrandom variation? It turns out that we would expect this amount of varia-tion purely from random sampling variability.

41 This assumes we do not have the closing-time problem mentioned under Section 8.3, discussed earlier.


The distribution of the correlation coefficient is complicated and skewed, but it is well known (Hald 1952, 608 ff) that applying the Fisher transformation:

$$z = \frac{1}{2}\ln\frac{1+r}{1-r}$$

produces the variable z that is approximately normal with

$$\text{Mean} = \mu \approx \frac{1}{2}\ln\frac{1+\rho}{1-\rho} + \frac{\rho}{2(n-1)}, \qquad \text{Variance} \approx \frac{1}{n-3}$$

In other words, the variable

$$u = (z - \mu)\sqrt{n-3}$$

will be, to a good approximation, normally distributed with mean 0 and variance 1.

For our example of correlation between the U.S. Treasury and IBM, let us assume that the true correlation is

$$\rho = 0.457 \;\Rightarrow\; z = 0.4935$$

For our monthly samples, n = 21, so that

$$\mu \approx 0.5049$$

and the upper and lower 4.2 percent bands are

z: lower 4.2% band = 0.0971   upper 4.2% band = 0.913
r: lower 4.2% band = 0.097    upper 4.2% band = 0.722

Why pick 4.2 percent bands? Because there will be 8.4 percent or roughly a one-twelfth probability outside these bands. For 12 monthly correlations, we would expect roughly 11 out of 12 to be inside these bands and one outside these bands. In fact, we see exactly that: 11 of our calculated correlations are within the bands and only the smallest, 0.05, is outside the bands, and then only by a small amount.
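To make the band calculation concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the function name is mine) that reproduces the 4.2 percent bands above:

```python
import numpy as np
from scipy.stats import norm

def fisher_bands(rho, n, tail=0.042):
    """Bands for a sample correlation of n observations when the true
    correlation is rho, using the Fisher z-transformation."""
    z = np.arctanh(rho)                # 0.5 * ln((1 + rho) / (1 - rho))
    mu = z + rho / (2 * (n - 1))       # approximate mean of z
    sd = 1 / np.sqrt(n - 3)            # approximate std dev of z
    lo, hi = norm.ppf([tail, 1 - tail], loc=mu, scale=sd)
    return np.tanh(lo), np.tanh(hi)    # transform back to correlations

print(fisher_bands(0.457, 21))         # roughly (0.097, 0.722)
```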

Risk and Summary Measures: Volatility and VaR 247

Page 267: Quantitative Risk Management - Donutsdocshare01.docshare.tips/files/31781/317812447.pdfRisk Management versus Risk Measurement 3 CHAPTER 2 Risk, Uncertainty, Probability, and Luck

C08 02/14/2012 13:49:55 Page 248

The conclusion we should take from this simple exercise is that 12 monthly correlations ranging from 0.05 to 0.69 are exactly what we would expect to observe. Such an observation provides no strong evidence that the correlation between 10-year U.S. Treasury yields and the IBM stock price is different from 0.457 or that it changes over time.

This is just another example of how we need to think about randomness and uncertainty. Although correlations no doubt do change, we have to turn a skeptical eye toward evidence in this arena as in others. In fact, we have somewhat understated the problems with measuring correlations. As McNeil, Frey, and Embrechts (2005, 98) argue, the problem of measuring correlation is even more difficult for fat-tailed distributions. Since financial time series generally are fat-tailed, this is a practical consideration.

The only reliable way to test for and measure changing correlations is to specify formal statistical models of how the correlation changes and then test these models.

Changing correlations is no doubt important, but the clustering of joint extreme events mentioned earlier and the tail behavior discussed under copulas in the prior section and Section 9.4 are also important in this context. The view that "correlations all go to one in times of market stress" will be captured by joint distributions that exhibit tail dependence. The usual multivariate Student t distribution exhibits this kind of tail dependence, and the simple mixture of normals mimics such behavior.

Fat Tails The final stylized fact is that financial time series have fat tails. As we have discussed this in various places already, I do not discuss it any further here.

Overview of Fitting Volatility

We now turn to fitting volatility and answering some of the questions raised at the beginning of this section.

Use Changes First and most important, we need to look at changes. As mentioned earlier, changes are uncorrelated (speaking very loosely, they are independent). This means

level tomorrow = level today + change

In technical jargon, the levels will be autoregressive, and we want to focus on the independent changes. We should never look at the volatility or correlation of levels.


As a general rule of thumb, we want to look at percent changes or changes in logs (which is for most purposes the same). For most prices, FX rates, equity indexes, and so on, what matters is the percent change rather than the absolute change. The S&P index today is roughly 1000, and a change of 10 points means a change in wealth of 1 percent. In 1929, the S&P composite was around 20, and a change of 10 points would have been a change of 50 percent. Comparing changes in the S&P index over time, we absolutely need to look at percent changes or log changes.

The one possible exception is changes in yields, where it may make sense to look at absolute changes, changes in yields measured in basis points. There is considerable debate as to whether one should consider changes in log yields or changes in yields as more fundamental, and there is no settled answer.

Simplest—Fixed Period The simplest approach is to use a fixed period and calculate the volatility and covariances. The period should be long enough to have some stability in the estimated volatility but short enough to capture changes in volatility.

As mentioned before and well known from any statistics text, the standard estimator of the volatility would be:

$$\text{Volatility} = \sqrt{\frac{1}{n-1}\sum_{i=1,n}\left(\Delta rf_i - \overline{\Delta rf}\right)^2}, \qquad \text{Mean} = \overline{\Delta rf} = \frac{1}{n}\sum_{i=1,n}\Delta rf_i$$

and the elements of the variance-covariance matrix would be:

$$\text{Covariance} = \frac{1}{n}\sum_{i=1,n}\left(\Delta rf_{1,i} - \overline{\Delta rf_1}\right)\left(\Delta rf_{2,i} - \overline{\Delta rf_2}\right)$$

where {Δrf_1, . . . , Δrf_n} is a set of n observations on risk factor changes. (The correlation matrix is calculated from the variance-covariance matrix by dividing by the volatilities.)

The question is, how many observations n to use? Thirty is probably too few; 125 (roughly a half-year) might be a minimum. With 30 observations, there will be considerable random sampling variability. Let us say the true volatility for some risk factor (say the CAC equity index we have been considering) is 20 percent per year. Using the formula for the small-sample variation in the variance given in Appendix 8.1, with 30 observations the 2.5 percent confidence bands will be 15 percent and 25 percent. That is, there is a 5 percent chance that using 30 observations we would calculate the volatility to be less than 15 percent or more than 25 percent. For 125 observations, in contrast, the 2.5 percent confidence bands would be 17.5 percent and 22.5 percent. Still wide, but substantially less.
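These bands follow from the chi-squared sampling distribution of the variance discussed in Appendix 8.1. A minimal sketch (assuming SciPy; the function name is mine):

```python
import numpy as np
from scipy.stats import chi2

def vol_band(true_vol, n, tail=0.025):
    """Confidence band for a volatility estimated from n observations,
    using (n - 1) * s^2 / sigma^2 ~ chi-squared with n - 1 df."""
    lo = true_vol * np.sqrt(chi2.ppf(tail, n - 1) / (n - 1))
    hi = true_vol * np.sqrt(chi2.ppf(1 - tail, n - 1) / (n - 1))
    return lo, hi

print(vol_band(0.20, 30))    # roughly (0.15, 0.25)
print(vol_band(0.20, 125))   # roughly (0.175, 0.225)
```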

More Sophisticated—Exponential Weighting More sophisticated would be to use exponential weighting, weighting recent historical observations more heavily than more distant observations. This is the approach used by RiskMetrics (Mina and Xiao/RiskMetrics 2001). The formulae will be (showing the corresponding formulae for equally weighted):

Exponentially weighted:

$$s = \sqrt{w \sum_{i=1,n} \lambda^{i-1}\left(\Delta rf_i - \overline{\Delta rf}\right)^2}, \qquad s_{1,2}^2 = w \sum_{i=1,n} \lambda^{i-1}\left(\Delta rf_{1,i} - \overline{\Delta rf_1}\right)\left(\Delta rf_{2,i} - \overline{\Delta rf_2}\right)$$

Equally weighted:

$$s = \sqrt{\frac{1}{n-1}\sum_{i=1,n}\left(\Delta rf_i - \overline{\Delta rf}\right)^2}, \qquad s_{1,2}^2 = \frac{1}{n}\sum_{i=1,n}\left(\Delta rf_{1,i} - \overline{\Delta rf_1}\right)\left(\Delta rf_{2,i} - \overline{\Delta rf_2}\right)$$

with the normalizing weight

$$w = \frac{1}{\sum_{i=1,n}\lambda^{i-1}} = \frac{1-\lambda}{1-\lambda^n} \quad \text{for } |\lambda| < 1$$

To measure how fast the decay takes effect, we can measure the half-life, or the number of periods required before the weight is ½:

$$n_{1/2} \text{ s.t. } \lambda^{n_{1/2}} = 0.5 \;\Rightarrow\; n_{1/2} = \ln 0.5 / \ln \lambda$$

As an example, λ = 0.9 ⇒ n_{1/2} = 6.6, or roughly six observations. This is far too few, meaning that the decay of λ = 0.9 is far too fast. A decay of λ = 0.99 ⇒ n_{1/2} = 69, which would be more reasonable.

Alternatively, we can solve for the decay that gives a specified half-life:

$$\lambda \text{ s.t. } \lambda^n = 0.5 \;\Rightarrow\; \lambda = \exp(\ln 0.5 / n)$$

We can also solve for the fraction of the total weight accounted for by the first n* periods relative to the fraction accounted for with no exponential weighting. For exponential weighting, the weight accounted for by the first n* periods is:

$$\text{Fraction of exponential weight in first } n^* \text{ periods} = \frac{1-\lambda^{n^*}}{1-\lambda^n}$$

$$\text{Ratio of exponential weight in first } n^* \text{ periods to non-exponential weight} = \frac{\left(1-\lambda^{n^*}\right)/\left(1-\lambda^n\right)}{n^*/n}$$
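A minimal sketch of the exponentially weighted estimator and the half-life calculation (assuming NumPy; names are mine, and the code assumes the most recent observation comes first so it receives weight λ⁰):

```python
import numpy as np

def ewma_vol(changes, lam=0.99):
    """Exponentially weighted volatility of risk-factor changes,
    normalized by w = (1 - lam) / (1 - lam**n) as in the text."""
    changes = np.asarray(changes, dtype=float)
    n = len(changes)
    weights = lam ** np.arange(n)         # lam**0 on the newest change
    w = (1 - lam) / (1 - lam**n)          # normalizing constant
    dev = changes - changes.mean()
    return np.sqrt(w * np.sum(weights * dev**2))

def half_life(lam):
    """Periods until the weight decays to one-half."""
    return np.log(0.5) / np.log(lam)

print(half_life(0.9), half_life(0.99))    # about 6.6 and 69
```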

Full Econometrics—ARCH and GARCH The most sophisticated and proper way to estimate time-varying volatility is using ARCH (autoregressive conditionally heteroscedastic) and GARCH (generalized ARCH) models. Many texts cover the econometrics of such models. McNeil, Frey, and Embrechts (2005, ch. 4) and Anderson (2001) review the econometrics and the application to financial time series.

I do not cover these because they are beyond the scope of this book. More importantly, although these are the theoretically appropriate models, they are not practical in a multivariate context for more than a few variables. Since most practical risk measurement applications work with tens or hundreds of risk factors, these models are usually not practical.

The Curse of Dimensionality One major problem in practical applications is in estimating the variance-covariance or correlation matrix. The number of parameters we need to estimate grows very fast, faster than the number of historical observations.

There will usually be many risk factors, on the order of tens or hundreds. For k risk factors, there will be (k² + k)/2 independent parameters in the variance-covariance matrix. This number gets large quickly. For 100 risk factors, we will have over 5,000 parameters to estimate, with only limited data. If we use two years of daily data, roughly 500 periods, we will have roughly 50,000 observations. Speaking very roughly, this means only 10 observations per estimated parameter—a very small number.

In practice this shows up as poorly estimated elements of the variance-covariance matrix or correlation matrix. More specifically, the estimated variance-covariance matrix may not be positive semi-definite (a requirement for a variance-covariance matrix, the matrix analogue of σ² ≥ 0).

There are various ad hoc methods for dealing with this problem, but I do not discuss them here.

8.6 UNCERTAINTY AND RANDOMNESS—THE ILLUSION OF CERTAINTY

Uncertainty and randomness enter all aspects of finance and enter into both the estimation and the use of volatility and VaR; the maxim that there is nothing certain but death and taxes is as true for VaR and volatility as for any aspect of finance. There is, however, a natural tendency to fall into an "illusion of certainty": because we have measured the volatility or VaR, somehow the future has become less random. Much of the first section of this book focused on the notion that human intuition and our natural training often do not prepare us well to recognize and manage randomness.

Uncertainty and randomness will enter into quantitative risk measurement in a number of different ways:

• First, any measurement, say of volatility or VaR, is an estimate, based in some manner on history and various assumptions. Like any estimate, it is subject to statistical uncertainty resulting from various sources, some easy to quantify, some impossible. Among the sources of uncertainty will be:
  • Measurement error, for example, if P&L is reported incorrectly, late, or not at all for some period or some portion of the portfolio
  • Finite data samples and the resulting standard statistical uncertainty
  • Incorrect or inappropriate assumptions or approximations
  • Outright errors in programming or data collection
• Second, the world changes over time, so that an estimate based on history may not be appropriate for the current environment.
• Finally, actual trading results and P&L for particular days will be random, and so we will always have results that exhibit variability from or around parameters, even if those parameters were known exactly.

Variability in 1 Percent VaR for 100-Day P&L

I focus in this section on the last of these uncertainties, the inherent uncertainty existing even when VaR and volatility are known. Even when the VaR is known with certainty, there will still be variability in the observed trading results. This probabilistic variability in P&L is relatively easy to visualize using the 1%/99% VaR and the P&L experienced over a 100-day period. Let us pretend that the P&L is normally distributed and that we have somehow estimated the VaR with no error. The true 1%/99% VaR will be −2.33σ. For the 100 trading days, we would anticipate:42

• There should be one day with P&L as bad or worse than −2.33σ.
• The worst P&L and the empirical quantile (the average of the first- and second-worst P&L) should not be too far from −2.33σ.

42 When we observe 100 trading days, the empirical 1 percent quantile is indeterminate between the first- and second-smallest observations, and the average between the two is a reasonable estimate.


Neither of these is fully borne out. There is only a 37 percent chance there will be exactly one day with P&L as bad or worse than −2.33σ. There is actually a 26 percent chance of two or more days and a 37 percent chance of no days worse than −2.33σ.43

The P&L for the worst trading day and the empirical quantile will also deviate from the value −2.33σ. The P&L for the worst trading day has a 10 percent probability to be outside of the range [−3.28σ, −1.89σ], a rather wide band. Remember that under a normal distribution the probability of observing a loss of −3.28σ or worse would be only 0.05 percent, so one would think of this as an unlikely event, and yet as the worst loss in a 100-day period, it has probability of 5 percent and is not particularly unlikely.44 The empirical quantile (average of the first- and second-worst trading day) has a 10 percent probability to be outside of the range [−2.92σ, −1.82σ], again a rather wide range around the true value of −2.33σ.

Remember that these results are not because the VaR is wrongly estimated but simply result from inherent variability and uncertainty. We should always expect to see variability in actual trading results, and in this example, we have simply calculated how much variability there will be.
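These probabilities (detailed in footnotes 43 and 44) are straightforward to reproduce; a minimal sketch assuming SciPy:

```python
from scipy.stats import binom, norm

n, p = 100, 0.01                  # 100 days; each day exceeds the VaR w.p. 1%
print(binom.pmf(1, n, p))         # P[exactly one day worse]  ~ 0.37
print(binom.pmf(0, n, p))         # P[no day worse]           ~ 0.37
print(1 - binom.cdf(1, n, p))     # P[two or more days worse] ~ 0.26

# Worst of 100 days: P[worst < -c*sigma] = 1 - Phi(c)**100 by symmetry
for c in (3.28, 2.33, 1.89):
    print(c, 1 - norm.cdf(c) ** n)   # ~0.05, ~0.63, ~0.95
```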

This example understates the variability we could expect to see in actual trading results, for a number of reasons:

• We generally will not know the true value for the VaR, only an estimate that is itself uncertain.
• We will generally not know the true distribution of P&L with certainty or confidence. Trading results in the real world appear to exhibit fat tails relative to a normal distribution. As a result, the worst P&L observed over a period will likely be worse than assumed in this example.
• The world is nonstationary, with circumstances changing constantly, and so estimates based on the past may not be fully representative of the future.

43 This will be a process of repeated draws, Bernoulli trials with p = .01, q = .99. P[no draw > 1% VaR] = .99^100 = 0.366. P[k draws] = Comb(n,k) p^k q^(n−k) ⇒ P[1 draw] = 0.370 ⇒ P[2 or more] = 0.264. This result will hold generally for the 1%/99% VaR and not just when returns are normally distributed, as long as the VaR is correct.
44 The analytic formula for the distribution of the maximum from n normals is F(x)^n, and the density is n f(x) F(x)^(n−1). P[P&L < 3.283σ] = 0.999487. P[Max from 100 < 3.283σ] = 0.999487^100 = 0.95 ⇒ P[Max from 100 > 3.283σ] = 1 − .95 = 0.05. But P[Standard Normal > 3.283] = 0.000513.


For these and other reasons, we should expect to see more variability than this, but at least this example gives a sense of how much the P&L may vary. One should always keep in mind that simply calculating a number (such as VaR) does not mean the variability has been controlled, simply that we have some, possibly rough, idea of how much variability there may be.

8.7 CONCLUSION

This chapter has covered the mathematics behind the standard risk measurement tools, focusing on volatility and VaR. Chapter 9 turns to using these tools for a particularly simple portfolio, the U.S. Treasury bond and the CAC equity index futures we introduced in Chapter 1.

A full understanding of the details presented in this chapter requires a fair degree of technical sophistication. We should never lose sight of the fact, however, that the ideas are straightforward. Using the tools presented here requires an understanding of the concepts but does not necessarily require command of every technical nuance.

APPENDIX 8.1: SMALL-SAMPLE DISTRIBUTION OF VaR AND STANDARD ERRORS

Cramér (1974, para. 28.5 and para. 28.6) discusses the distribution of observed quantiles and extreme values or order statistics.

Distribution of Quantiles

Consider a sample of n observations {x_i} from a one-dimensional distribution, with the Z percent quantile q_z (for example, we might have Z = 0.01, and for a standard normal distribution the quantile q_z = −2.3263). If n·Z is not an integer and the observations are arranged in ascending order, {x_1 ≤ x_2 ≤ . . . ≤ x_n}, then there is a unique sample quantile equal to the observed value x_{m+1}, where m = the integer smaller than n·Z.45

45 If n·Z is an integer, then the quantile is indeterminate between x_{nZ} and x_{nZ+1}. For example, with Z = 0.01 and n = 100, n·Z = 1 and the 1 percent quantile is indeterminate between the first and second observations. The average of these two makes a good choice, but I do not know any easy distribution for this average.


The density of the observed quantile (x_{m+1}) is

$$g(x)\,dx = \binom{n}{m}(n-m)\,F(x)^m\,\left(1-F(x)\right)^{n-m-1}\,f(x)\,dx \qquad (8.2)$$

where F(x) = underlying distribution function and f(x) = density function.

This expression can be integrated numerically to find the mean, variance, and any confidence bands desired for any quantile, given an underlying distribution F(x). But the use of (8.2) is limited because it applies only to sample quantiles when n·Z is not an integer. With 100 observations, the Z = .01 quantile will be indeterminate between the first and second observations, and formula (8.2) cannot be used. Either the first or the second observation could be used to estimate the 1 percent quantile, and expression (8.4) below could be applied, but neither the first nor the second observation is ideal as an estimator of the 1 percent quantile because both are biased. For the first, second, and the average of the first and second, the mean and standard error will be:46

                     Mean     Std Error
1st observation     −2.508      0.429
Avg 1st and 2nd     −2.328      0.336
2nd observation     −2.148      0.309

46 For the first and second observations, the density (8.4) is integrated numerically. For the average of the two, I simulated with 1 million draws from a pseudo-random number generator.

An alternative, and easier, approach is to use the asymptotic expression derived by Cramér (and also quoted by Jorion [2007, 126], referencing Kendall [1994]). Cramér shows that asymptotically, the sample quantile is distributed normally:

$$N\left(q_z,\; \frac{1}{f(q_z)^2}\,\frac{z(1-z)}{n}\right) \qquad (8.3)$$

Using equation (8.3) with 100 observations, an underlying normal distribution, Z = 0.01, q_z = −2.3263, and f(q_z) = 0.0267, the asymptotic standard error of the quantile will be

$$\frac{1}{f(q_z)}\sqrt{\frac{z(1-z)}{n}} = \frac{1}{0.0267}\sqrt{\frac{0.01 \times 0.99}{100}} = 0.373$$

Note that the asymptotic formula does not give a terribly wrong answer for the average of the first and second observations (0.336 by simulation), even with only 100 observations.
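Both numbers are easy to check; a minimal sketch assuming NumPy and SciPy (the asymptotic standard error from (8.3), and a simulated standard error for the average of the first and second observations):

```python
import numpy as np
from scipy.stats import norm

n, Z = 100, 0.01
q = norm.ppf(Z)                                    # -2.3263
print(np.sqrt(Z * (1 - Z) / n) / norm.pdf(q))      # ~0.373, from (8.3)

rng = np.random.default_rng(0)
draws = np.sort(rng.standard_normal((100_000, n)), axis=1)
est = draws[:, :2].mean(axis=1)                    # avg of 1st and 2nd
print(est.mean(), est.std())                       # ~ -2.33 and ~0.336
```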

Distribution of Extremes (Order Statistics)

Consider a sample of n observations {x_i} from a one-dimensional distribution, as in the preceding section. Now consider the ν-th-from-the-top observation (so for 100 observations ν = 1 is the largest, ν = 100 is the smallest, and ν = 99 is the second-smallest). The density will be:

$$g(x)\,dx = n\binom{n-1}{\nu-1}\,F(x)^{n-\nu}\,\left(1-F(x)\right)^{\nu-1}\,f(x)\,dx \qquad (8.4)$$

Once again, this expression can be numerically evaluated to provide the mean, variance, and any confidence bands.

We can also use this expression to graph the distribution of the P&L that we would observe on an extreme day. As mentioned in Section 8.2, the VaR might better be termed the "statistically best-case loss" rather than worst-case loss. This is because when we actually experience, say, the worst out of 100 days, the P&L will have a large probability of being worse than the VaR and a much smaller chance of being better. Consider the 1%/99% VaR. For a normal distribution, this will be −2.326σ. For our $20 million bond holding, the VaR is roughly −$304,200. We should expect to see this loss roughly 1 out of 100 days. But what will we actually see on the worst out of 100 days? Figure 8.13 shows the distribution of the worst out of 100 days, assuming that the underlying P&L distribution is normal. Panel A shows the loss for a standardized normal (σ = 1, 1%/99% VaR = −2.326), while Panel B shows the loss for the $20 million bond position (σ = $130,800, 1%/99% VaR = −$304,200). The graph shows that there is a good chance the P&L will be worse than −$304,200. In fact, the P&L on that 1-out-of-100 day will be worse than −$304,200 with 63 percent probability and better than −$304,200 with only 37 percent probability.
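Equation (8.4) can be integrated numerically; this sketch (assuming SciPy; the function name is mine) reproduces the order-statistic means and standard errors quoted in the small table earlier in this appendix:

```python
import numpy as np
from math import comb
from scipy import integrate, stats

def order_stat_moments(nu, n):
    """Mean and std dev of the nu-th largest of n standard normal
    draws, integrating the density in Equation (8.4) numerically."""
    F, f = stats.norm.cdf, stats.norm.pdf
    c = n * comb(n - 1, nu - 1)
    g = lambda x: c * F(x)**(n - nu) * (1 - F(x))**(nu - 1) * f(x)
    m1 = integrate.quad(lambda x: x * g(x), -10, 10)[0]
    m2 = integrate.quad(lambda x: x * x * g(x), -10, 10)[0]
    return m1, np.sqrt(m2 - m1**2)

print(order_stat_moments(100, 100))   # smallest of 100: ~(-2.508, 0.429)
print(order_stat_moments(99, 100))    # second-smallest: ~(-2.148, 0.309)
```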

Distribution of Variance

For a normal distribution, the 15.866 percent/84 percent VaR is the same value as the volatility. The sampling distribution of the volatility and the 15.866 percent quantile will, however, be quite different. The volatility is calculated using all observations according to the formula:

$$\text{Variance} = \frac{1}{n-1}\sum\left(x_i - \bar{x}\right)^2$$

The sample Z percent quantile s_z, in contrast, is calculated from the ordered observations according to

$$F(s_z) = Z$$

or at most a proportion Z of the observations are less than s_z and at least a proportion (1−Z) of the observations are equal to or greater than s_z.

If the sample variance is s², then the small-sample distribution of (n − 1)s²/σ² is chi-squared (see, for example, Kmenta 1971, 139 ff).

FIGURE 8.13 Distribution for P&L for Worst out of 100 Days


We can then determine the probabilities of the sampling distribution as:

$$P\left[a \le (n-1)\frac{s^2}{\sigma^2} \le b\right] = P\left[\sigma^2\frac{a}{n-1} \le s^2 \le \sigma^2\frac{b}{n-1}\right] = P\left[a \le \chi^2 \le b\right]$$

Asymptotically the sampling distribution of s² is normal:

$$s^2 \sim N\left(\sigma^2, \frac{2\sigma^4}{n-1}\right), \qquad \left(\frac{s^2}{\sigma^2} - 1\right)\sqrt{\frac{n-1}{2}} \sim N(0, 1)$$

so that asymptotically

$$P\left[a \le \left(\frac{s^2}{\sigma^2} - 1\right)\sqrt{\frac{n-1}{2}} \le b\right] = P\left[\sigma^2\left(a\sqrt{\frac{2}{n-1}} + 1\right) \le s^2 \le \sigma^2\left(b\sqrt{\frac{2}{n-1}} + 1\right)\right] = P\left[a \le N \le b\right]$$

Comparison of Volatility and VaR (Quantile) Distributions

Using the distributions for the variance and the quantile given here, we can compare the sampling distribution for estimated volatility (square root of the variance) and estimated VaR (quantile) from a normal distribution.

There is a subtle question here, which is how exactly do we estimate the VaR? There are two common ways:

• From the volatility:
  a. Assume a functional form for the P&L distribution.
  b. Estimate the volatility from the data.
  c. Calculate the VaR as a multiple of the volatility.
• As the empirical quantile:
  a. Estimate the VaR directly as the appropriate quantile of the empirical distribution.


In the first case, the sampling distribution of the VaR will inherit that of the volatility (the VaR is the volatility grossed up by a fixed multiple—the VaR is the quantile of the assumed distribution and is conditional on the assumed functional form for the P&L distribution). The second case is the comparison we are really concerned with.

We will first compare the volatility and the 15.866 percent quantile, which for a standard normal distribution both have value 1.0. For 100 observations, we estimate the volatility by the usual estimator for the standard deviation, and the 15.866 percent quantile by the fifteenth-from-the-bottom observation (the empirical 15.866 percent quantile). The sampling distribution for the volatility is tighter than for the quantile. Table 8.7 and Figure 8.14 show the lower and upper 5 percent confidence bands (so that 90 percent of the probability mass is between the 5 percent and 95 percent level). These show that there is a 5 percent chance the volatility will be below 0.882 and a 5 percent chance the fifteenth observation will be below 0.806.47

The more interesting comparison, however, is between the VaR estimated by way of the volatility versus the VaR estimated by the empirical quantile for realistic values of VaR (say, 1 percent/99 percent VaR). Consider the 1 percent/99 percent VaR for 255 observations (roughly one year of trading days). The empirical 1 percent quantile will be the second observation. Table 8.8 shows the confidence bands for the VaR estimated from the volatility and from the second observation.48

TABLE 8.7 Comparison of Sampling Distribution for Volatility and VaR, 100 Observations

                               5% Level    Mean    95% Level
Volatility (finite-sample)       0.882     1.000     1.116
Volatility (asymptotic)          0.875     1.000     1.111
15th obs (15.866% quantile)      0.806     1.055     1.312
15th–16th avg*                   0.789     1.03      1.286
Asymptotic                       0.849     1.000     1.151

*Calculated as average between fifteenth and sixteenth values, by simulation.

47 The fifteenth observation is the empirical quantile, but it is also biased, with mean 1.055. The average of the fifteenth and sixteenth observations, with mean 1.03, is also shown, calculated by simulation. The sixteenth observation has mean 1.012 and confidence bands 0.767/1.265.
48 The average for the second observation is 2.501 (instead of 2.326). The average for the second and third is 2.412.


FIGURE 8.14 Confidence Bands for Volatility and 15.866 Percent VaR, 100 Observations
Note: Bands are shown for the VaR (quantile) and the volatility (standard deviation), finite-sample and asymptotic; the true volatility and VaR are 1.0.


The empirical quantile (the second observation) has confidence bands that are extremely wide relative to those for the volatility (−10 percent/+28 percent of the true value). This is hardly surprising. If the distribution is in fact normal, then it will be efficient to use all observations, obtain a precise estimate of the volatility, and then infer the quantile. Relying on only the observations in the far end of the tail, conditional on the distribution being normal, produces a less efficient estimate. (Also note that the asymptotic confidence bands are not very representative of the actual confidence bands.)49

There is another context in which we may wish to examine the sampling distribution of the volatility or the VaR. Say that we knew the value of the volatility or VaR, and we wanted to calculate the probability of some particular trading results. For example, we may want to assess the probability of volatility over a year being within a particular range, or the probability that the largest loss over 100 trading days will be worse than a chosen value. The distributions given earlier give the answer to that question. There is a subtle point here with respect to VaR. No matter how the VaR is estimated and no matter what the VaR sampling distribution is, the distribution of the observed empirical quantile (conditional on the VaR value) will be given by (8.2) or (8.3). Observed P&L results, for example, the observed quantile over a trading period, will have the sample distribution of the quantile (VaR) and not the volatility.

TABLE 8.8 Comparison of Sampling Distribution for 1 Percent/99 Percent VaR Estimated from Volatility and by Empirical Quantile—255 Observations

                                       5% Level          Mean    95% Level
VaR from volatility (finite-sample)   2.156 (−7.3%)      2.326   2.495 (+7.3%)
VaR from volatility (asymptotic)      2.150 (−7.6%)      2.326   2.490 (+7.1%)
2nd obs (1% quantile)                 2.086 (−10.3%)     2.501   2.990 (+28.5%)
2nd–3rd avg*                          2.040 (−12.3%)     2.412   2.830 (+21.7%)
Asymptotic                            2.093 (−10.0%)     2.326   2.559 (+10.0%)

*Calculated as average between second and third values, by simulation.

49 Crouhy, Galai, and Mark (2001, 245–246) do not carefully distinguish between the sampling distribution for VaR estimated by way of volatility versus VaR estimated by the empirical quantile. As a result, their comparison of the VaR estimated by way of volatility and by way of the empirical quantile is not correct, and their statement that the test based on the standard error of the quantile is less powerful than the chi-square test is not correct.


APPENDIX 8.2: SECOND DERIVATIVES AND THE PARAMETRIC APPROACH

One of the biggest drawbacks with the parametric or linear estimation approach is that it cannot capture nonlinear instruments well. This is not usually a fatal flaw, as most portfolios will be at least locally linear and the parametric approach can provide useful information.50 More problematic, however, is that with the standard approach, there is no way to confirm that nonlinearities are small or tell when they are large enough to make a difference (thus requiring an alternate approach).

I discuss in this section a way to estimate the effect of nonlinearity in the asset payoff using second derivative (gamma) information from the original assets or risk factors; a measure, in particular, that indicates when linearity fails to provide a good summary of the P&L distribution.51 Although this is more involved than the straightforward calculation of the portfolio variance, it is orders of magnitude less computationally intensive than Monte Carlo techniques. This measure provides a good indicator for the breakdown of linearity but not necessarily a good estimate for the size of the nonlinearity effect.

To lay out the ideas, consider first the univariate case with a single risk factor, where f represents risk factor changes. We assume that the risk factor changes are normal. Since we are particularly focused on whether and how nonlinearity in asset payoff translates into deviations from normality, assuming normality in the risk factor distribution (and then examining deviations from normality in the resulting P&L distribution) is the appropriate approach.

50 To quote Litterman (1996, 53): "Many risk managers today seem to forget that the key benefit of a simple approach, such as the linear approximation implicit in traditional portfolio analysis, is the powerful insight it can provide in contexts where it is valid. With very few exceptions, portfolios will have locally linear exposures about which the application of portfolio risk analysis tools can provide useful information."
51 I have seen discussion of using second derivatives to improve estimates of the variance (for example, Crouhy, Galai, and Mark 2001, 249 ff and Jorion 2007, ch. 10) and mention of using an asymptotic Cornish-Fisher expansion for the inverse distribution function to improve the VaR critical value. I have not seen the combination of using the skew and kurtosis together with the Cornish-Fisher expansion to examine the effect of nonlinearity on the P&L distribution. This is nonetheless a straightforward idea and may have been addressed by prior authors.


The portfolio P&L p will be approximated by:

$$p \approx \delta f + \tfrac{1}{2}\gamma f^2 \qquad (8.5)$$

This will not be normal because of the term f², and the important question is how large are the deviations from normality. The idea is to calculate the first three higher moments of the distribution (variance, skew, kurtosis) and then use an asymptotic Cornish-Fisher expansion of the inverse distribution function to examine how seriously the quantiles (VaR) for this distribution deviate from those of a normal.

If we assume that Equation 8.5 is an exact expression (third and higher order derivatives are zero), then the higher order products of the P&L will be:52

$$p^2 = \delta^2 f^2 + \tfrac{1}{4}\gamma^2 f^4 + \delta\gamma f^3$$
$$p^3 = \delta^3 f^3 + \tfrac{1}{2^3}\gamma^3 f^6 + \tfrac{3}{4}\delta\gamma^2 f^5 + \tfrac{3}{2}\delta^2\gamma f^4$$
$$p^4 = \delta^4 f^4 + \tfrac{1}{2^4}\gamma^4 f^8 + \tfrac{4}{2}\delta^3\gamma f^5 + \tfrac{4}{2^3}\delta\gamma^3 f^7 + \tfrac{6}{4}\delta^2\gamma^2 f^6$$
$$p^5 = \delta^5 f^5 + \tfrac{1}{2^5}\gamma^5 f^{10} + \tfrac{5}{2}\delta^4\gamma f^6 + \tfrac{5}{2^4}\delta\gamma^4 f^9 + \tfrac{5}{2}\delta^3\gamma^2 f^7 + \tfrac{5}{4}\delta^2\gamma^3 f^8$$

Assuming that f is mean zero and normally distributed, the expectation for all odd-ordered terms in f will be zero, and the even terms will be E[f^j] = j!/(j/2)! × σ^j / 2^{j/2}. This gives the expectations:

$$\text{1st:}\; E[p] = \tfrac{1}{2}\gamma\sigma^2$$
$$\text{2nd:}\; E[p^2] = \delta^2\sigma^2 + \tfrac{3}{4}\gamma^2\sigma^4$$
$$\text{3rd:}\; E[p^3] = \tfrac{15}{8}\gamma^3\sigma^6 + \tfrac{9}{2}\delta^2\gamma\sigma^4$$
$$\text{4th:}\; E[p^4] = 3\delta^4\sigma^4 + \tfrac{105}{16}\gamma^4\sigma^8 + \tfrac{45}{2}\delta^2\gamma^2\sigma^6$$
$$\text{5th:}\; E[p^5] = \tfrac{945}{32}\gamma^5\sigma^{10} + \tfrac{75}{2}\delta^4\gamma\sigma^6 + \tfrac{525}{4}\delta^2\gamma^3\sigma^8$$

The central moments of p will be:

$$\text{1st:}\; E[p] \qquad (8.6a)$$
$$\text{2nd:}\; E[p^2] - (E[p])^2 \qquad (8.6b)$$
$$\text{3rd:}\; E[p^3] - 3E[p^2]E[p] + 2(E[p])^3 \qquad (8.6c)$$
$$\Rightarrow \text{skew} = \frac{E[p^3] - 3E[p^2]E[p] + 2(E[p])^3}{\left(E[p^2] - (E[p])^2\right)^{1.5}}$$
$$\text{4th:}\; E[p^4] - 4E[p^3]E[p] + 6E[p^2](E[p])^2 - 3(E[p])^4 \qquad (8.6d)$$
$$\Rightarrow \text{kurtosis} = \frac{E[p^4] - 4E[p^3]E[p] + 6E[p^2](E[p])^2 - 3(E[p])^4}{\left(E[p^2] - (E[p])^2\right)^2}$$
$$\Rightarrow \text{excess kurtosis} = \text{kurtosis} - 3$$
$$\text{5th:}\; E[p^5] - 5E[p^4]E[p] + 10E[p^3](E[p])^2 - 10E[p^2](E[p])^3 + 4(E[p])^5 \qquad (8.6e)$$

52 This essentially says we ignore terms with third and higher derivatives, even though they would enter with the same order in f as terms we are otherwise including.

For the univariate case, the central moments m_i will be:

$$\text{1st:}\; \tfrac{1}{2}\gamma\sigma^2$$
$$\text{2nd:}\; \delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4$$
$$\text{3rd:}\; \gamma^3\sigma^6 + 3\delta^2\gamma\sigma^4 \;\Rightarrow\; \text{skew} = \frac{\gamma^3\sigma^6 + 3\delta^2\gamma\sigma^4}{\left(\delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4\right)^{1.5}}$$
$$\text{4th:}\; 3\delta^4\sigma^4 + \tfrac{15}{4}\gamma^4\sigma^8 + 15\delta^2\gamma^2\sigma^6 \;\Rightarrow\; \text{excess kurtosis} = \frac{3\delta^4\sigma^4 + \tfrac{15}{4}\gamma^4\sigma^8 + 15\delta^2\gamma^2\sigma^6}{\left(\delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4\right)^2} - 3$$
$$\text{5th:}\; 30\gamma\delta^4\sigma^6 + 17\gamma^5\sigma^{10} + 85\delta^2\gamma^3\sigma^8 \;\Rightarrow\; \text{5th cumulant } \kappa_5 = m_5 - 10m_3m_2 = 60\gamma^3\delta^2\sigma^8 + 12\gamma^5\sigma^{10}$$

Once the variance, skew, and kurtosis have been calculated, one can evaluate whether they are large enough to make a substantial difference by evaluating the approximate quantiles of the P&L distribution and comparing them with the normal.


The Cornish-Fisher expansion for the inverse of a general distribution function F(·) can be used to evaluate the (approximate) quantiles for the P&L distribution, accounting for skew and kurtosis. These quantiles can be compared to the normal quantiles. If they are substantially different, then we can infer that the nonlinearity of asset payoffs has a substantial impact on the P&L distribution; if they do not differ substantially, then the nonlinearity of the payoff has not substantially altered the P&L distribution relative to normality.

The Cornish-Fisher expansion is an asymptotic expansion for the inverse distribution function of a general distribution. The terms up to third order (terms of the same order are in square brackets) are53

$$w \approx x + \left[\tfrac{1}{6}(x^2-1)m_3\right] + \left[\tfrac{1}{24}(x^3-3x)m_4 - \tfrac{1}{36}(2x^3-5x)m_3^2\right] + \left[\tfrac{1}{120}(x^4-6x^2+3)g_3 - \tfrac{1}{24}(x^4-5x^2+2)m_3m_4 + \tfrac{1}{324}(12x^4-53x^2+17)m_3^3\right] \qquad (8.7)$$

y = μ + σw is the solution to the inverse distribution function F(y) = prob, that is, the approximate Cornish-Fisher critical value for a probability level prob
x = the solution to the standard normal Φ(x) = prob, that is, the critical value for probability level prob with a standard normal distribution (note that this is the lower tail probability, so that x = −1.6449 for prob = 0.05, x = 1.6449 for prob = 0.95)
m₃ = skew
m₄ = excess kurtosis
g₃ = κ₅/σ⁵
κ₅ = 5th cumulant from before
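Putting the moment formulas and the expansion together, a minimal sketch in Python for the univariate delta-gamma case (assuming NumPy and SciPy; the function name is mine):

```python
import numpy as np
from scipy.stats import norm

def delta_gamma_cf_quantile(delta, gamma, sigma, prob):
    """Approximate quantile of p = delta*f + 0.5*gamma*f**2 with
    f ~ N(0, sigma^2): central moments from the univariate formulas
    above, then the Cornish-Fisher expansion (8.7)."""
    mu = 0.5 * gamma * sigma**2
    var = delta**2 * sigma**2 + 0.5 * gamma**2 * sigma**4
    m3c = gamma**3 * sigma**6 + 3 * delta**2 * gamma * sigma**4
    m4c = (3 * delta**4 * sigma**4 + 15/4 * gamma**4 * sigma**8
           + 15 * delta**2 * gamma**2 * sigma**6)
    k5 = 60 * gamma**3 * delta**2 * sigma**8 + 12 * gamma**5 * sigma**10
    s = np.sqrt(var)
    m3 = m3c / s**3                    # skew
    m4 = m4c / var**2 - 3              # excess kurtosis
    g3 = k5 / s**5
    x = norm.ppf(prob)
    w = (x + (x**2 - 1) * m3 / 6
         + (x**3 - 3*x) * m4 / 24 - (2*x**3 - 5*x) * m3**2 / 36
         + (x**4 - 6*x**2 + 3) * g3 / 120
         - (x**4 - 5*x**2 + 2) * m3 * m4 / 24
         + (12*x**4 - 53*x**2 + 17) * m3**3 / 324)
    return mu + s * w

# With gamma = 0 (a purely linear payoff) this recovers the normal quantile:
print(delta_gamma_cf_quantile(1.0, 0.0, 1.0, 0.01))   # ~ -2.326
```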

Care must be exercised when the skew and kurtosis are large enough to indicate a breakdown of the linear approximation. The Cornish-Fisher expansion can indeed be used to approximate the quantiles, but the accuracy of the approximation given by Equation 8.7 will not be very good when the skew and kurtosis are large. The Cornish-Fisher expansion is asymptotic and, particularly for quantiles far out in the tails, requires many terms when the distribution deviates substantially from normal. The truncated expansion in (8.7) will not be good for large values of skew and kurtosis, as seen in the example discussed in Chapter 9. In this sense, the current approach provides an indicator for when linearity breaks down, but not necessarily an effective approximation; in such cases, either the historical or Monte Carlo methods must be used.

53 cf. Abramowitz and Stegun (1972, 935) under "Cornish-Fisher asymptotic expansion," where they express it in terms of Hermite polynomials.

For the univariate case, this complicated approach is unnecessary; one can compare the delta and gamma (first and second derivatives) directly. For the multivariate case, however, it is impossible to evaluate the effect of first and second derivative terms without accounting for covariance effects; the calculation of the portfolio skew and kurtosis is the only effective approach.

For the multivariate case, assume the risk factor changes are a jointly normal vector F. The P&L will be, approximately:

$$p \approx \delta' F + \tfrac{1}{2} F' \Gamma F \qquad (8.8)$$

If we assume this is an exact expression, and calculate the moments as before (retaining only terms that will be nonzero for a multivariate normal):

$$\text{1st:}\; E[p] = \tfrac{1}{2}\textstyle\sum_{ij}\gamma_{ij}\sigma_{ij}$$
$$\text{2nd:}\; E[p^2] = \textstyle\sum_{ij}\delta_i\delta_j\sigma_{ij} + \tfrac{1}{4}\sum_{ijkl}\gamma_{ij}\gamma_{kl}\sigma_{ijkl}$$
$$\text{3rd:}\; E[p^3] = \tfrac{1}{8}\textstyle\sum_{ijklmn}\gamma_{ij}\gamma_{kl}\gamma_{mn}\sigma_{ijklmn} + \tfrac{3}{2}\sum_{ijkl}\delta_i\delta_j\gamma_{kl}\sigma_{ijkl}$$
$$\text{4th:}\; E[p^4] = \textstyle\sum_{ijkl}\delta_i\delta_j\delta_k\delta_l\sigma_{ijkl} + \tfrac{1}{16}\sum_{ijklmnpq}\gamma_{ij}\gamma_{kl}\gamma_{mn}\gamma_{pq}\sigma_{ijklmnpq} + \tfrac{3}{2}\sum_{ijklmn}\delta_i\delta_j\gamma_{kl}\gamma_{mn}\sigma_{ijklmn}$$

For a multivariate normal, the central moments for k > 2 can all be expressed in terms of σ_ij (Isserlis 1918; see also the Wikipedia entry under "Multivariate Normal Distribution"):

k-th moment = 0 for k odd
k-th moment = Σ (σ_ij σ_kl . . . σ_rs) for k even, where the sum is taken over all allocations of the set {1, . . . , k} into k/2 (unordered) pairs

For example, for 4th order:

$$\sigma_{iiii} = E[X_i^4] = 3\sigma_{ii}^2$$
$$\sigma_{iiij} = E[X_i^3 X_j] = 3\sigma_{ii}\sigma_{ij}$$
$$\sigma_{iijj} = E[X_i^2 X_j^2] = \sigma_{ii}\sigma_{jj} + 2(\sigma_{ij})^2$$
$$\sigma_{iijk} = E[X_i^2 X_j X_k] = \sigma_{ii}\sigma_{jk} + 2\sigma_{ij}\sigma_{ik}$$
$$\sigma_{ijkn} = E[X_i X_j X_k X_n] = \sigma_{ij}\sigma_{kn} + \sigma_{ik}\sigma_{jn} + \sigma_{in}\sigma_{jk}$$

This is messy but can be programmed without too much trouble. The distribution of P&L is univariate, and so the portfolio variance, skew, and kurtosis will be scalars, and the central moments will be given by the same expressions as before (Equation 8.6a). The Cornish-Fisher expansion (Equation 8.7) can be applied to the overall portfolio just as for the univariate risk factor case to evaluate whether the skew and kurtosis are small enough that the linear approach is valid, or large enough to require serious attention to nonlinearity.
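As an illustration of the bookkeeping, a minimal sketch (assuming NumPy; names are mine) of the Isserlis pairing formula and the second moment E[p²] for the multivariate case:

```python
import numpy as np

def sigma4(S, i, j, k, l):
    """Fourth moment E[X_i X_j X_k X_l] of a mean-zero multivariate
    normal with covariance S, via the Isserlis pairing formula."""
    return S[i, j] * S[k, l] + S[i, k] * S[j, l] + S[i, l] * S[j, k]

def second_moment_p(delta, Gamma, S):
    """E[p^2] for p = delta'F + 0.5 * F'Gamma F with F ~ N(0, S)."""
    n = len(delta)
    quad = sum(Gamma[i, j] * Gamma[k, l] * sigma4(S, i, j, k, l)
               for i in range(n) for j in range(n)
               for k in range(n) for l in range(n))
    return delta @ S @ delta + 0.25 * quad

# For n = 1 this reduces to delta^2 sigma^2 + (3/4) gamma^2 sigma^4,
# matching the univariate expression in the text.
```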


CHAPTER 9
Using Volatility and VaR

We discussed in Chapter 8 the standard tools used in quantitative risk measurement—primarily volatility and VaR. In this chapter, we apply these tools to measuring the market risk of a simple portfolio of two assets—one government bond and one equity futures. The idea is to show how the tools are applied in practice by way of an example. We will roughly parallel the structure of Chapter 8, providing simple examples of the topics.

9.1 SIMPLE PORTFOLIO

Let us consider a portfolio made up of a government bond and an equity index futures (the same portfolio considered in Chapter 1):

• Own $20M U.S. Treasury 10-year bond.
• Long €7M nominal of CAC futures (French equity index).

We can take this as a simple example or analogue of a trading firm, with the bond representing a fixed-income trading desk or investment portfolio and the futures representing an equity trading desk or investment portfolio. In a real firm, there would be many positions, but the simplicity of the portfolio allows us to focus on the techniques and tools without taking on the complexity of a real portfolio. We turn in Chapter 10 to a more complex portfolio, where the quantitative techniques bring value. Nonetheless, even this simple portfolio exhibits multiple risks:

• Yield risk—U.S. Treasury curve.
• Equity risk.
• Operational risk.
  • Delivery risk for bond.
  • Position-keeping and reconciliation for the futures.


9.2 CALCULATING P&L DISTRIBUTION

In Chapter 8, we discussed volatility and VaR, using as an example a normal distribution with volatility $130,800. This is actually the P&L distribution for the U.S. Treasury bond position (treated on its own) estimated using the parametric or delta-normal approach. I will use this as an example to illustrate how we calculate the P&L distribution and from that the VaR and volatility.

We will follow the four steps outlined in Section 8.3, "Methods for Estimating the P&L Distribution":

1. Asset to Risk Factor Mapping—Translate from assets held into market risk factors.
2. Risk Factor Distributions—Estimate the range of possibilities for the market risk factors.
3. Generate the Portfolio P&L Distribution—Generate risk factor P&L and sum to produce the overall portfolio P&L distribution.
4. VaR, Volatility, and so on—Estimate the VaR, volatility, or other desired characteristics of the P&L distribution.

We will estimate the P&L distribution (and the VaR and so on) using the parametric, historical, and Monte Carlo approaches.

Calculating Volatility and VaR for Single Bond Position

The U.S. Treasury position is long $20 million notional of the 10-year U.S. Treasury bond (the 3.75 percent of November 15, 2018). Given just this information, it is hard to have a firm idea of the risk for this position. After we estimate the P&L distribution, however, we will end up with a good idea of the risk under normal trading conditions.

The portfolio is evaluated as of January 27, 2009, so that all prices, yields, volatilities, and so on, are taken as of January 27, 2009. The goal is to find the distribution of the P&L for the portfolio, which in this case is just the single bond. We will consider the P&L over one day, going from the 27th to the 28th. The bond price on the 27th was 110.533, and so we need to find some way to estimate possible prices and price changes for the 28th. The most likely outcome is probably no change, and I will assume the average P&L is zero so that the P&L distribution has mean zero. But we still need to estimate the range of values around the mean to get the distribution. As is usually the case for estimating the P&L distribution, we can conceptually separate the problem into two parts: the distribution of market risk factors (market realizations that are independent of the firm's actions) and the mapping or transformation of the portfolio positions to those market risk factors.

Step 1—Asset to Risk Factor Mapping Step 1 is to map from assets to risk factors. In this example, the mapping is very simple. We have one asset, the 10-year U.S. Treasury. We will use one market risk factor, the yield on the 10-year U.S. Treasury. The mapping is one to one, with the transformation being the standard yield-to-price calculation. We could use the bond price instead of yield, but it is more convenient to use yields, since they standardize (at least partially) across bonds with different coupons and maturities.1

We will implement the mapping or transformation differently depending on the particular estimation approach we use—parametric, historical, or Monte Carlo. At this stage, however, there is no huge difference in the three approaches. For all three, the aim is to translate from the actual positions we hold—the 10-year bond—to some recognizable market risk factor—the market yield in this case. For all approaches, we will use the bond yield-to-price function:

$$P(y) = \sum_{i=1}^{n} \frac{\text{Coup}_i}{(1+y)^i} + \frac{\text{Prin}}{(1+y)^n}$$

For historical and Monte Carlo we will use the full function, while for parametric we will use the first derivatives and a linear approximation: assume that yield changes transform linearly into price changes and P&L. For small yield changes, price changes are roughly proportional to yield changes:2

$$\Delta P \approx -\text{DV01} \times \Delta Y$$

The DV01 is the first derivative of the yield-to-price function, called delta in an option context; thus the alternative term delta-normal for the parametric approach.

The DV01 of the 10-year bond is about $914/bp for $1 million nominal, so for a $20 million holding, the portfolio DV01 or sensitivity will be about $18,300/bp. In other words, for each 1bp fall in yields we should expect roughly $18,300 profit, and for each 5bp fall roughly $91,500 profit (since prices and yields move inversely).

1 I will assume that readers are familiar with basic finance and investments, such as that covered in Bailey, Sharpe, and Alexander (2000).
2 See Coleman (1998b) for an overview of bond DV01 and sensitivity calculations.


For historical or Monte Carlo estimation, we will use the actual yield-to-price function. In other words, say the yield goes to 2.58 percent from 2.53 percent:

Yield = 2.53 percent ⇒ Price = 110.526 ⇒ Portfolio = $22,256,400
Yield = 2.58 percent ⇒ Price = 110.070 ⇒ Portfolio = $22,165,200
⇒ P&L = −$91,200³
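A minimal full-revaluation sketch (the function names are mine, and this simplified version assumes annual coupons and an integer number of periods; the book's figures use actual Treasury conventions, so the numbers below only come close):

```python
def bond_price(y, coupon, n):
    """Price per 100 principal of an annual-pay bond: the
    yield-to-price function P(y) above, simplified."""
    return sum(coupon / (1 + y)**i for i in range(1, n + 1)) \
           + 100 / (1 + y)**n

def revaluation_pnl(y0, y1, coupon, n, notional):
    """Full-revaluation P&L of a notional holding for a move y0 -> y1."""
    return notional * (bond_price(y1, coupon, n)
                       - bond_price(y0, coupon, n)) / 100

# Stylized 10-year 3.75 percent bond, 5bp rise in yield on $20M
print(revaluation_pnl(0.0253, 0.0258, 3.75, 10, 20_000_000))
# about -$92,000 with these simplifications, vs. -$91,200 in the text
```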

Step 2—Risk Factor Distributions Now we turn to Step 2, where we have to determine the distribution of the market risk factor, in this case bond yields. Nobody can say with certainty what the distribution truly is, but examining history is always a good start. Figure 9.1 shows the empirical distribution for daily changes in bond yields for 273 trading days (roughly 13 months). The daily changes range from −20 basis points (bp) to +27bp, but most changes are grouped around zero.4

We could use this history as our distribution and have some confidence that we were not far off the truth. For historical estimation, that is exactly what we will do.

FIGURE 9.1 Distribution of Yield Changes, 10-Year U.S. Treasury, One Year of Daily Data
Note: The histogram is for one year of daily data. The line represents a normal distribution with the same volatility (7.15bp per day).

3 The accrued interest on the 27th would be 0.756. Note that using the linear approximation would give a P&L for a 5bp change in yields of about $91,440, not very different.
4 These data are synthetic but correspond roughly to the period January 2008 through January 2009.


To start, however, we consider the parametric approach and work with a parametric functional form (the normal distribution) rather than with the empirical distribution. The empirical distribution looks roughly normal, and a normal distribution has been overlaid in Figure 9.1. Although the normal distribution does not fit the data perfectly, it does capture the major characteristics, including the concentration around zero and the wide dispersion of changes, both positive and negative.5 (I discuss non-normality and fat tails further on.)

A normal distribution is simple—characterized by just the mean and the volatility—and the normal is easy to use, being programmed into all mathematical and statistical packages. If we want to assume normality, the best-fit normal distribution has mean zero and a standard deviation of 7.15bp per day.

Step 3—Generate P&L Distributions To obtain the overall portfolio P&L distribution, we must now translate the distribution of market yields into portfolio P&L. This is where the three approaches—parametric, historical, and Monte Carlo—begin to differ substantially.

Parametric The parametric approach uses the approximation that, for small changes, price changes are roughly proportional to yield changes:

$$\Delta P = -\text{DV01} \times \Delta Y$$

As noted earlier, the portfolio DV01 or sensitivity will be about $18,300/bp. In other words, for each 1bp rise in yields, we should expect roughly an $18,300 loss (since prices and yields move inversely).

The linearity of the transformation from yields to portfolio P&L, and the assumed normality of the yield distribution, means that the portfolio P&L will also be normal. The P&L distribution will be the same as the yield distribution, only blown up or multiplied by the DV01 of 18,300. Figure 9.2 shows the price or P&L distribution translated from yield changes. (Note that the axis is reversed relative to Figure 9.1, since large falls in yield mean large rises in price.) Since we assume the yield distribution is normal, the P&L distribution will also be normal. The translation is:

$$\text{Dist'n}[P\&L] \approx \text{Dist'n}[-\text{DV01} \times \Delta Y] = N[0, (18{,}300 \times 7.15)^2] = N[0, 130{,}800^2]$$
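In code, the parametric calculation is one line each for the volatility and the VaR; a minimal sketch assuming SciPy:

```python
from scipy.stats import norm

dv01 = 18_300                        # $/bp for the $20M bond position
yield_vol = 7.15                     # bp per day
pnl_vol = dv01 * yield_vol           # about $130,800 per day
var_99 = -norm.ppf(0.01) * pnl_vol   # 2.326 * volatility
print(pnl_vol, var_99)               # ~130,800 and ~304,000
```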

5 The fit is not perfect, but the empirical distribution is actually not too far from normal. Statistically, we can reject normality but we cannot reject a Student t with six degrees of freedom.


We have thus arrived at a reasonable description of the P&L distribution for our $20M 10-year bond position—normal with a standard deviation of about $130,800. This is the same distribution as in Chapter 8, Panel A of Figures 8.2 and 8.4.

Historical Calculating the historical P&L distribution is conceptually straightforward, and in this case quite simple:

• Choose the historical period for the market risk factors. In this example, it is the sample of 272 yield observations (slightly over one year) shown in Figure 9.1.
• Calculate the bond P&L from the market yields.

There are two important respects in which the historical approach will (or may) differ from the parametric approach. The first and most obvious is the distribution used for the market risk factors. Refer back to Figure 9.1, which shows the empirical and a fitted normal distribution. The historical uses the empirical (summarized by the histogram), while the parametric uses the fitted normal (the solid line).

The second point is how the portfolio P&L is calculated. The parametric approach uses a linear (first derivative or delta) approximation:

$$\Delta P \approx -\text{DV01} \times \Delta Y$$

and we could do exactly the same for the historical approach. If we used the linear approximation, then differences in the resulting distributions between the parametric and historical approaches would be due to the distribution of the market risk factors.

FIGURE 9.2 Distribution of P&L, Parametric Approach, One Year of Daily Data
Note: This assumes that the market risk factors (yields) are normally distributed and the bond P&L is linearly related to yields.

For the historical approach, however, the P&L is generally calculated using full revaluation, in this example, using the yield-to-price function. So, for example, the 10-year yield on January 26 was 2.65 and on January 27 it was 2.53, a fall of 12bp. We calculate the bond price at these two yields and take the difference:

Yield = 2.65 percent ⇒ Price = 109.435 ⇒ Portfolio = $22,038,200
Yield = 2.53 percent ⇒ Price = 110.526 ⇒ Portfolio = $22,256,400
⇒ P&L = $218,200

This gives a slightly different P&L from using a linear approximation (the result would be $219,500), but for a bond such as this the difference is really trivial.
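As a minimal sketch of the full-revaluation mechanics, the following Python fragment prices a bond from its yield and takes the difference of prices at the two yields. The 3 percent coupon, and the omission of accrued interest and day counts, are simplifying assumptions, so the dollar figures will not match the text exactly; the point is the revaluation step, not the bond arithmetic.

```python
import numpy as np

def bond_price(yield_pct, coupon_pct=3.0, years=10, freq=2):
    """Price per 100 face of a bullet bond. The 3 percent semiannual
    coupon is a hypothetical placeholder; accrued interest and day
    counts are ignored."""
    y = yield_pct / 100 / freq
    n = int(years * freq)
    cf = np.full(n, coupon_pct / freq)       # coupon cash flows per 100 face
    cf[-1] += 100                            # principal repaid at maturity
    disc = (1 + y) ** -np.arange(1, n + 1)   # discount factors
    return float((cf * disc).sum())

face = 20_000_000
# Full revaluation: difference of full prices at the two yields,
# rather than the linear DV01 approximation.
pnl = face / 100 * (bond_price(2.53) - bond_price(2.65))
```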

There is a subtle issue here that is not obvious when we use the linear approximation but does become apparent when doing full revaluation. The historical yield of 2.65 percent is for January 26, but we actually want to calculate what the bond price would be at that yield for January 27. In other words, we need to use the historical market risk factors but apply them as if they applied to the asset today.

The difference is only minor for yields on the 26th versus the 27th, but becomes important when we look far back in time. Consider the 5bp fall in yields from 3.90 percent on January 3, 2008, to 3.85 percent on January 4. Our 10-year Treasury was not even issued in January 2008, but if it were, it would have been some 10 years 10 months in maturity (versus 9 years 9 1/2 months on January 27, 2009). At a yield of 3.85 percent on January 3, 2008, it would have been trading at $99.12 and the 5bp fall in yields would have corresponded to an $87,400 profit. In fact, on January 27, 2009, the bond price was $110.526 and a 5bp change in yield would correspond to a $91,200 profit.

The point is that we want to know the impact of changes in historical market risk factors on today's holdings of today's assets, not the impact on historical holdings or on the assets at the time. In the current example, the market risk factor is the bond yield, and more specifically, changes in the yield. To assess the impact, we start with today's yield (2.53 percent) and apply the historical changes to arrive at a hypothetical new yield. Going from January 3, 2008, to January 4th, the yield fell by 5bp—applied to today (January 27, 2009, when yields are 2.53 percent), this would have meant a fall from 2.58 percent to 2.53 percent—a profit of $91,200.
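Putting the pieces together, a hedged sketch of the historical approach: apply each historical yield change to today's yield and fully revalue today's bond. The synthetic changes below stand in for the 272 observed yield changes; in practice the observed series would be used. bond_price and face are reused from the sketch above.

```python
rng = np.random.default_rng(0)
# Placeholder for the 272 historical daily yield changes, in percent
# (about 7.15bp per day volatility); in practice, read the observed series.
changes = rng.normal(0.0, 0.0715, 272)

today = 2.53                      # yield on the valuation date (percent)
price_today = bond_price(today)   # bond_price() from the sketch above

# Apply each historical change to today's yield, then fully revalue:
# a yield rise produces a negative P&L.
pnl_hist = np.array([face / 100 * (bond_price(today + dc) - price_today)
                     for dc in changes])
vol_hist = pnl_hist.std(ddof=1)   # sample volatility of the revalued P&L
```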


The issue becomes even more dramatic when we consider short-dated instruments, options in particular. Say we held a two-week option as of January 27, 2009. In January 2008, the actual option would be one year and two weeks. The point of looking at the historical market risk factors is to try to estimate how a two-week option would behave under different market conditions, not the difference between a one-year versus two-week option.

When we do apply the actual yield changes to today's bond holding (the bond holding as of January 27, 2009), we arrive at the distribution of P&L shown in Figure 9.3. The parametric distribution (generated assuming the yield change distribution is normal and the bond prices linear in yields) is also shown. The two distributions do differ, and the difference is almost entirely due to the difference in the distribution of yields (the market risk factor) rather than the linearity of the parametric approach versus full revaluation of the historical approach. The differences will be explored more shortly.

FIGURE 9.3 Distribution of P&L, Parametric and Historical Approaches, One Year of Daily Data
Note: The solid line shows the distribution for the parametric approach (assuming yields are normal and bond P&L is linear) and the histogram shows the historical approach (using the historical distribution of yields and the full yield-to-price function).

Monte Carlo   Monte Carlo estimation for the single bond position is also straightforward, and parallels the historical approach.

- Assume some distribution for the market risk factor, in this case, yield changes. For now, we will assume normal mean-zero, but other distributions could be chosen.

- Estimate the parameters for the parametric distribution. In this case, calculate the volatility from the 272 observations shown in Figure 9.1. (In other words, we will make exactly the same distributional assumption as for the parametric approach—yield changes are normal with volatility 7.15bp per day.)
- Generate a Monte Carlo finite-sample distribution for the risk factors by simulating a large number of draws from the parametric distribution.
- Calculate the bond P&L from the market yields, just as for the historical approach.

In a sense, the Monte Carlo approach is a combination of the parametric approach (assuming a particular parametric form for the distribution) and the historical approach (calculating the actual P&L from yield changes). In this particular example, there is little benefit to the Monte Carlo over the parametric approach because the distribution and portfolio are so simple. If the portfolio included more complex assets, such as options that had highly nonlinear payoffs, then the benefits of the Monte Carlo approach would come to the fore.
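A sketch of the Monte Carlo variant under the same assumptions: draw simulated yield changes from the fitted normal and revalue exactly as in the historical sketch above (bond_price, face, today, price_today, and rng are reused from there).

```python
n_sims = 1_000
sim_changes = rng.normal(0.0, 0.0715, n_sims)   # simulated yield changes (percent)

pnl_mc = np.array([face / 100 * (bond_price(today + dc) - price_today)
                   for dc in sim_changes])
vol_mc = pnl_mc.std(ddof=1)   # finite-sample volatility; not exactly the input
```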

As an example of using the Monte Carlo approach, I simulated 1,000 yield changes assuming a normal distribution. Figure 9.4 shows the histogram for the simulated yield changes, with the appropriate normal distribution overlaid. One thing to note is that for a finite sample, the empirical distribution will never be exactly normal with the originally assumed volatility. For the 1,000 yield changes used in this example, the calculated volatility was 7.32bp per day instead of the assumed 7.15bp per day. The dotted line in Figure 9.4 shows a normal distribution with volatility 7.15bp. The volatility for the empirical distribution is slightly, but not dramatically, different from what we originally assumed.

FIGURE 9.4 Distribution of Yield Changes, Parametric, and Monte Carlo Approaches
Note: The dotted line shows the assumed distribution for the yields—normal with volatility 7.15bp per day; the solid line shows a normal with the Monte Carlo volatility of 7.32bp per day; the histogram shows the Monte Carlo realization (1,000 simulated yield changes, assuming yields are normal).

The P&L is then calculated, as for the historical approach, usually using full revaluation. Figure 9.5 shows the histogram of the resulting P&L, with a normal curve overlaid.

Step 4—Extract VaR, Volatility, and so on   From the distribution of P&L, we can get the volatility, VaR, expected shortfall, or whatever is our preferred risk measure. We should remember that although such measures, VaR, for example, are often talked about as if they were the primary goal in risk measurement, the P&L distribution is really the object of interest. The VaR is simply a convenient way to summarize the distribution (and specifically the spread of the distribution). We can use the volatility, the VaR, the expected shortfall, or some other measure (or combination of measures) to tell us about the distribution. But it is the distribution that is the primary object of interest and the VaR is simply a measure or statistic that tells us something about the P&L distribution.

FIGURE 9.5 Distribution of P&L, Monte Carlo Approach
Note: The histogram shows the P&L for a particular realization for the Monte Carlo approach (1,000 simulated yield changes, assuming yields are normal). The dotted line shows the P&L assuming yields are normal with volatility 7.15bp per day; the solid line shows the P&L with the Monte Carlo volatility of 7.32bp per day.


When we use the parametric approach, we have an analytic form for the distribution, almost invariably normal with some volatility. In the preceding example, the distribution is:

$$\mathrm{Dist'n}[P\&L] = N[0, (18{,}300 \times 7.15)^2] \approx N[0, 130{,}800^2]$$

The VaR is easy to calculate with this distribution. We simply ask what is the level Y of P&L such that there is a probability Z of experiencing worse:

$$Z = P[P\&L \le Y] = P[\text{Standard Normal Variable} \le (Y - \mu)/\sigma]$$

where μ = 0 = mean of the normal distribution, and σ = 130,800 = standard deviation (volatility) of the normal distribution.

We can look up the VaR off a table for the normal distribution, such as Table 8.1. We see from that table that the 1%/99% VaR is 2.326 times the volatility, which means that for our example the 1%/99% VaR is $304,200.
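The table lookup is a one-liner in code; a minimal sketch:

```python
from scipy.stats import norm

sigma = 130_800                  # parametric P&L volatility (dollars)
z = 0.01                         # tail probability for the 1%/99% VaR
var_99 = -norm.ppf(z) * sigma    # 2.326 * 130,800 ≈ $304,200
```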

For the historical approach, we have a set of values rather than a parametric functional form for the distribution. The histogram of the values is shown in Figure 9.3. For our example, there are 272 P&L values. If we want the volatility of the distribution, we simply apply the formula for the standard deviation:

$$\text{Volatility} = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(P_i - \bar P\right)^2}, \qquad \text{Mean} = \bar P = \frac{1}{n}\sum_{i=1}^{n} P_i$$

In our example, the volatility is $130,800. If we want the VaR, we need to get the quantile. We sort the observations and pick the nth from the smallest (most negative). Table 9.1 shows the four largest increases in yields and the resulting four most-negative P&Ls.

TABLE 9.1 Largest Yield Changes for 1/08 to 1/09, and P&L for 10-year U.S. Treasury

Date        Yield   Change from Prior Day (bp)   P&L ($)
9/19/2008   3.77    26.9                         −486,000
1/24/2008   3.64    17.1                         −311,000
9/30/2008   3.83    16.3                         −296,000
10/8/2008   3.71    16.3                         −296,000


For 272 observations, the 1%/99% VaR will be the third observation from the bottom (see the appendix to Chapter 8 for the definition of the quantile). From Table 9.1, we see that this is −$296,000, so the VaR is $296,000.

For Monte Carlo, we have a set of values, just as for the historical approach. In our example, we have generated 1,000 observations, but we usually would generate many more. The volatility and VaR are calculated just as for the historical approach. For this particular example, the volatility is $134,000 and the 1%/99% VaR is the 10th-smallest P&L, in this case −$315,100.
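A hedged sketch of the empirical calculation, with a synthetic P&L vector standing in for the 272 historical observations (or the 1,000 Monte Carlo draws); the quantile convention shown picks the third-smallest value for n = 272 and the tenth-smallest for n = 1,000, matching the text.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
pnl = rng.normal(0.0, 130_800, 272)    # placeholder for the daily P&L sample

vol = pnl.std(ddof=1)                  # sample volatility
k = math.ceil(0.01 * len(pnl))         # 3rd smallest for n=272, 10th for n=1,000
var_99 = -np.sort(pnl)[k - 1]          # 1%/99% VaR, reported as a positive loss
```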

Table 9.2 shows the volatility and VaR for our example bond for the three approaches. There are some similarities and differences between them.

- The volatility for the parametric and historical are the same because we fitted the parametric (normal) distribution to the historical volatility.
- The volatility for the parametric and Monte Carlo approaches are different because the Monte Carlo is for a finite number of draws. With only 1,000 draws, we should not be surprised at a difference of this magnitude.6
- The 1%/99% VaR for the parametric and historical are different because the historical distribution is not, in fact, normal. In this case, the VaR is smaller, indicating that for this particular example, the lower tail of the historical distribution does not extend as far out as the normal distribution.
- The VaR for the parametric and the Monte Carlo are different, but we should expect this, since the volatilities are different.

6Remember from the appendix to Chapter 8 that the sample variance s² for a normal sample with variance σ² is distributed such that (n − 1)s²/σ² ~ χ² with n − 1 degrees of freedom. This means that for 1,000 draws there is a 5 percent probability the sample (Monte Carlo) variance may be 8.6 percent lower or 9.0 percent higher than the true variance (so the volatility would be 4.4 percent lower or 4.4 percent higher).

TABLE 9.2 Volatility and VaR Estimated from Three Approaches

              Volatility   1%/99% VaR
Parametric    130,800      304,200
Historical    130,800      296,000
Monte Carlo   134,000      315,100


I want to focus for a moment on the difference in the VaR estimate between the parametric and historical approaches. The two distributions have the same volatility (we chose the parametric distribution to ensure that), but different VaRs. The historical distribution is probably not normal, but even if it were, there would still be random variation in the historical VaR, since it involves a finite number of observations. Whenever we use a limited number of observations to estimate a tail measurement such as the VaR, there will be sampling variation. Because we have so few observations in the tail, the estimate may have a fair degree of variability.

We can examine the random variability we should expect to see in the VaR using the distribution of order statistics given in the appendix to Chapter 8. For 272 observations, the 1%/99% VaR will be the third-smallest observation. Using numerical integration to evaluate the formulae in the appendix, the 95 percent confidence bands for the third-smallest observation out of 272 from a normal distribution would be −2.836σ to −1.938σ (versus the true quantile for a continuous normal distribution of −2.326). Using the volatility in the table, this gives 95 percent confidence bands of −$371,000 to −$253,000. These are quite wide bands on the historical VaR, reinforcing the idea that we have to use tail measures such as VaR with care.
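These bands can be reproduced without explicit numerical integration by using the standard result that if X₍ₖ₎ is the k-th order statistic of n i.i.d. draws with CDF F, then F(X₍ₖ₎) follows a Beta(k, n − k + 1) distribution; a sketch:

```python
from scipy.stats import beta, norm

n, k = 272, 3                       # third-smallest observation out of 272
# F(X_(k)) ~ Beta(k, n - k + 1) for i.i.d. draws with CDF F
lo, hi = beta.ppf([0.025, 0.975], k, n - k + 1)
bands_sd = norm.ppf([lo, hi])       # ≈ (-2.84, -1.94) standard deviations
bands_usd = bands_sd * 130_800      # ≈ (-$371,000, -$253,000)
```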

Using Volatility and VaR for Single Bond Position

We have just estimated the P&L distribution and summary measures (volatility and VaR) for a portfolio of a U.S. Treasury bond. Now we turn to how we might use this information. Focusing on the parametric approach, Table 8.1, reproduced here as Table 9.3, shows various combinations of Z and Y for the normal distribution.

TABLE 9.3 Various Combinations of Probability (Z) and P&L (Y) for Normal Distribution (cf. Table 8.1)

Z       Y (VaR)    (Y−μ)/σ   P[Standard Normal Variable ≤ (Y−μ)/σ]
15.9%   −130,800   −1.000    0.159
5%      −215,100   −1.645    0.050
2.5%    −256,300   −1.960    0.025
1%      −304,200   −2.326    0.010
0.39%   −348,000   −2.661    0.0039
0.1%    −404,100   −3.090    0.001


With these data we can say some useful things about the bond position. The first and most important is that, knowing the volatility is roughly $130,800, we should expect to see P&L of more than $130,800 about one day out of three, since the probability of daily P&L lower than −$130,800 or higher than +$130,800 are each about 16 percent. This effectively calibrates the P&L for normal trading conditions: it is roughly $130,800, not $13,080 and not $1,308,000. This is considerably more than we knew before we started.

We could extend our understanding somewhat. Assuming normality of risk factors, as we have here, the volatility is $130,800 and the 5%/95% VaR is $215,100. What about VaR for lower probabilities? We could use Table 9.3, but we should have less confidence in VaR for low probabilities using the normal distribution. We have two simple alternatives. First, we could apply Litterman's rule of thumb that 4-sigma events occur about once per year. This would translate into saying that the 0.39%/99.61% VaR is about $523,200 (four times $130,800).

We could alternatively assume that the risk factors are distributed according to a mixture of normals (1 percent chance of a high, five-times volatility day; that is, a = 1 percent, b = 5). Assuming all risk factors are simultaneously either low or high volatility, the portfolio distribution will also be a mixture of normals. Referring back to Table 8.4, we see that the 0.39%/99.61% VaR would be $356,700 (2.727σ), only slightly larger than implied by the simple normal distribution.

There are a few issues to emphasize at this point.

- First and most important, these numbers, as for all risk numbers, should be used respectfully. They are approximations to reality and one must recognize their limitations. I said earlier "the volatility is roughly $130,800" because one never truly knows the distribution of tomorrow's P&L. We might be confident with the order of magnitude (the volatility is not $13,080 nor is it $1,308,000) but one should not trust $130,800 absolutely. In Chapter 8, and again further on, we discuss uncertainty in the estimates. Basic common sense and market experience encourage care in using such estimates.
- Second, the volatility and VaR calculated here are summary measures based on history and simply summarize that history in one way or another. This is virtually always the case. That is no bad thing since knowing the past is the first step toward informed judgments about the future. It should, however, encourage humility in using the numbers, since the VaR and volatility tell us no more than what happened in the past, albeit in a concise and useful manner.
- Third, in estimating the volatility and VaR, we have made various assumptions. For example, with the parametric approach, we assumed the distribution of market risk factors was normal. For the historical approach, we assumed that the (relatively small) number of historical observations was a good estimate of the true distribution. For the Monte Carlo approach, we assumed market risk factors were normal. Such assumptions are necessary, but we must remember that our results depend on our assumptions and we should not put too much belief in our results unless we have absolute confidence in our assumptions—and anybody with experience in the markets knows the assumptions are never perfect.
- Fourth, we must assess how reasonable the assumptions are for the use we make of the results. In this case, the normality assumption for the parametric approach is reasonable for the central part of the distribution. I have considerable confidence in using the volatility and making judgments about standard trading conditions, that is, about the central part of the distribution. On the other hand, I would have much less confidence that the 0.1%/99.9% VaR is actually $399,500 (it is probably larger) since the tails are most likely fat and our assumption of normality will not capture this well.
- Fifth, and related to the preceding, it is always easier to estimate characteristics of the central part of the distribution than to estimate characteristics of the tails. The central part of the distribution can provide considerable insight, and this should be exploited, but when moving to the tails and extreme events, extra caution is necessary.
- Finally, I have focused heavily on the volatility. Volatility is appropriate and useful as a summary measure in this case because the P&L distribution is more or less symmetric. In the case of nonsymmetric distributions (for example, short-dated options with high gamma) volatility will be less appropriate.

Uncertainty in Volatility and VaR Estimates

The true values for volatility and VaR are never known with certainty. There will be a number of sources of uncertainty:

- The usual statistical uncertainty in the estimate due to the finite number of observations used to estimate the value of volatility or VaR.
- Erroneous assumptions about the underlying statistical model. For example, we usually make an assumption about the functional form of the P&L distribution but the assumption may not be correct. (The statistical uncertainty mentioned earlier assumes the functional form is correct.)
- The world is nonstationary, with circumstances changing constantly, and so estimates based on the past may not be fully representative of the present, much less the future.

Let us examine the parametric volatility estimate for the U.S. Treasury bond, which is $130,800. We first consider the statistical uncertainty (the first of the preceding sources), assuming that the P&L distribution is in fact normal. The estimate is based on 272 observations. The appendix to Chapter 8 gives the formula for confidence intervals for the variance, and from this we can calculate the statistical uncertainty in the volatility estimate. The 95 percent confidence bands (2.5 percent on either side of the estimate) are shown in Table 9.4.

Statistical uncertainty in the VaR will be the same as for the volatility (in percentage terms) since we are assuming normality and the VaR is just a multiple of the volatility.
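A sketch of the chi-squared confidence interval behind Table 9.4, using the standard result that (n − 1)s²/σ² follows a chi-squared distribution with n − 1 degrees of freedom; the exact convention the text uses may differ slightly, so the bands come out close to, but not necessarily exactly at, the ±8–9 percent shown in the table.

```python
from scipy.stats import chi2

n = 272
vol_hat = 130_800
# 95% confidence band for the true volatility given the sample volatility
lo = vol_hat * ((n - 1) / chi2.ppf(0.975, n - 1)) ** 0.5
hi = vol_hat * ((n - 1) / chi2.ppf(0.025, n - 1)) ** 0.5
# VaR bands scale identically, since the VaR is a multiple of the volatility.
```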

The other sources of uncertainty are harder to evaluate. We can, however, calculate what would be the error if the P&L distribution were a mixture of normals (with a = 1%, b = 5) instead of a simple normal. Table 8.4 in Chapter 8 shows the VaR for a normal and mixture of normals, and these are reproduced in Table 9.5.

For a moderate probability level (the 5%/95% VaR) there is little difference between the normal and mixture. For a low probability level, say the 0.1%/99.9% VaR, there is a large difference: $404,100 (normal) versus $752,600 (mixture), nearly twice as large. If the true distribution were a mixture of normals and we assumed normal, or vice versa, we would arrive at a 0.1%/99.9% VaR quite far from reality. This is an example of the large degree of uncertainty, particularly in the VaR for low probability levels, which can result from uncertainty in the true functional form of the P&L distribution.

In sum, even the best estimates of volatility and VaR are subject to uncertainty, sometimes considerable uncertainty.

TABLE 9.4 Confidence Bands for Volatility and VaR Estimates

             2.5% Value         Estimate   97.5% Value
Volatility   119,300 (−8.81%)   130,800    141,400 (+8.09%)
5% VaR       196,100 (−8.81%)   215,100    232,500 (+8.09%)
1% VaR       277,400 (−8.81%)   304,200    328,800 (+8.09%)

TABLE 9.5 VaR Levels for Normal and Mixture of Normal Distributions (a = 1%, b = 5)

           Normal    no. SD   Mixture   no. SD
5.0% VaR   215,100   1.64     197,000   1.51
1.0% VaR   304,200   2.33     288,900   2.21
0.1% VaR   404,100   3.09     752,600   5.75


9.3 SUMMARY MEASURES TO STANDARDIZE AND AGGREGATE

Summary risk measures are used, primarily, in two related but conceptually distinct ways:

1. To standardize, aggregate, and analyze risk across disparate assets (or securities, trades, portfolios) under standard or usual trading conditions.
2. To measure tail risk or extreme events.

In this section, we discuss using volatility (or VaR) to standardize and aggregate. We turn to tail events in the next section.

Standardize under Normal Trading Conditions

Using volatility and VaR as tools for comparing across disparate assets under standard or normal trading conditions is relatively straightforward. To understand this use better, consider our simple portfolio, and say that a bond trader with experience in the U.S. government bond market is promoted to manage our hypothetical portfolio, which includes both U.S. Treasuries and French equities. From long experience in the bond market, the trader knows intuitively what the risk is of $20M in 10-year U.S. Treasuries (or any other U.S. bond, for that matter). Were this trader to manage only U.S. Treasuries, he would know from long experience how much particular trades might make or lose during a normal trading period, how trades would interact together in a portfolio, and have a good idea of how positions might behave during extreme conditions. But the trader has little experience with equities, does not have the same depth of experience and intuition, and needs some way to compare equity positions with something he knows. For example, how risky is a €7 million position in CAC futures?

The volatility (or alternatively, the VaR) is the simplest and most immediately informative tool for providing the manager with a comparison. By calculating the volatility for the equity trade, the manager can quickly gain insight into the riskiness of the equity and calibrate the equity versus familiar U.S. bond trades.

For the €7M CAC futures, an estimate of the volatility is $230,800 per day. This is the parametric estimate. We arrive at it in the same way we did for the aforementioned U.S. bond, following the four steps:

1. Asset to Risk Factor Mapping—Translate from assets held into market risk factors.
   - Here the mapping is as beta-equivalent notional, using the CAC index itself as the equity index. Since the instrument and the index are the same, the mapping is one to one.
2. Risk Factor Distributions—Estimate the range of possibilities for the market risk factors.
   - Assume that percent changes in the CAC equity index are normally distributed and estimate the volatility from data. The estimated volatility is 2.536 percent per day.
3. Generate the Portfolio P&L Distribution—Generate risk factor P&L and sum to produce the overall portfolio P&L distribution.
   - The mapping from asset to risk factor is one to one so, given the risk factor (the CAC index) has volatility 2.536 percent per day, the position will have volatility 2.536 percent per day. On €7 million or $9.1 million, this is $230,800.
4. VaR, Volatility, and so on—Estimate the VaR, volatility, or other desired characteristics of the P&L distribution.
   - We have the volatility, $230,800, already from step 3.

The volatility for $20 million of the U.S. 10-year bond is $130,800 per day; in other words, the equity position is substantially riskier than the U.S. bond position even though the equity notional is smaller, at €7 million or $9.1 million.

Here the volatility is used as a summary measure to allow a reasonable comparison of the distributions of P&L. This comparison of the P&L distributions works even though the securities are quite different. The bond is a U.S. bond requiring an up-front investment; the equity is a euro-based futures, a derivative requiring no up-front investment. Still, money is money and we can compare the profits and losses of the two positions. Like any summary measure, the volatility does not tell everything, but it does provide a valuable comparison between these two securities.

Aggregating Risk

We would also like to aggregate the risk across these two disparate securities. The volatility of the combined portfolio will not be the sum of the separate volatilities because the two securities provide some diversification. When the bond goes down, sometimes the equity will go up and vice versa. This incorporates the idea of portfolio or diversification effects. The next chapter covers portfolio-related information that can be mined from the P&L distribution and the portfolio volatility, but for now we simply ask what would be the volatility of the combined portfolio.


We turn again to the four-step process for generating the P&L distribution. Steps 1 and 2 are unchanged from before—we do the mapping and estimate the risk factor distributions for the bond and the equity separately. It is Step 3—generating the portfolio P&L distribution—that is now different. We first generate the distributions of yield and equity index P&L. This is the same as for the two assets on their own. These separate P&L distributions are shown in Figure 9.6.

FIGURE 9.6 P&L Distribution for Bond and Equity Futures
Panel A: P&L distribution for bond (mean 0, standard deviation $130,800). Panel B: P&L distribution for equity futures (mean 0, standard deviation $230,800).
Reproduced from Figure 5.7 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

Now, however, we need to combine the distributions. It is very important that we are not simply adding the volatilities. We are combining the two distributions themselves. The easiest way to explain is to examine the P&L for the bond and equity as if we were doing historical estimation. A few of the historical observations might be as displayed in Table 9.6. For the first date, the yield fell by 2.7bp, leading to a bond profit of $49,450. The CAC index fell by 1.81 percent, for a loss of $164,400. For this date, the two assets moved in opposite directions and they net off for a portfolio loss of $114,900. For the second date, the bond and equity both showed profits.

We go through each day and calculate the overall portfolio P&L from the constituent bond and equity P&Ls. Some dates the assets move together, some dates opposite. The net portfolio P&L is the result of all the co-movements between assets.

Figure 9.7 shows the overall portfolio distribution with the separate bond and equity distributions overlaid. The dispersion of the overall portfolio P&L distribution is more than either the bond or the equity but less than the sum of the individual dispersions. Figure 9.7 shows both the portfolio volatility ($291,300) and the VaR ($479,200).

TABLE 9.6 Sample Observations for Bond and Equity Risk Factors and Portfolio P&L

Date          Yield   Change (bp)   Bond P&L ($)   CAC     % Ch     Equity P&L ($)   Port. P&L ($)
Jan 4, 2008   3.87    −2.7          49,450         5,447   −1.81    −164,400         −114,900
Jan 7, 2008   3.83    −3.4          62,290         5,453   0.11     10,090           72,380
Jan 8, 2008   3.78    −4.9          89,840         5,496   0.78     71,210           161,100
Jan 9, 2008   3.82    3.9           −71,200        5,435   −1.10    −100,300         −171,500

FIGURE 9.7 P&L Distribution for Portfolio of Bond and Equity Futures

Calculating the portfolio P&L day by day and combining to get the portfolio P&L is conceptually simple but computationally intensive. It is exactly what we do for historical and Monte Carlo estimation, but for parametric estimation, the bond and CAC distributions are normal, and normal distributions can be combined more easily. In fact, the sum of normals is normal, with volatility that combines according to the rule:

$$\mathrm{Vol}(\text{Portfolio A} + \text{Portfolio B}) = \sqrt{\mathrm{Vol}(A)^2 + 2\rho\,\mathrm{Vol}(A)\,\mathrm{Vol}(B) + \mathrm{Vol}(B)^2}$$

In this case,

$$\mathrm{Vol}(\$20\text{M UST} + \text{€7M CAC}) = \sqrt{130{,}750^2 + 2 \times 0.24 \times 130{,}750 \times 230{,}825 + 230{,}825^2} \approx 291{,}300$$
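A one-line check of the combination rule in code:

```python
import math

vol_bond, vol_cac, rho = 130_750, 230_825, 0.24
vol_port = math.sqrt(vol_bond**2
                     + 2 * rho * vol_bond * vol_cac
                     + vol_cac**2)          # ≈ 291,300
```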

Table 9.7 summarizes the volatility for the individual assets and the portfolio assuming that the distributions are all normal (using parametric estimation). The portfolio volatility of $291,300 is more than either the bond or equity, but less than the sum of the stand-alone volatilities.

There are a few points we need to emphasize regarding how we are using volatility here:

- The hypothetical manager is using VaR or volatility to compare one trade versus another and analyze the effect of aggregating trades under usual or normal trading conditions. As a result, it makes sense to focus on the central part of the distribution and to use volatility.
- The comparison or calibration of the equity trade versus the U.S. trade is a useful guide but not the final word. Apart from other considerations, the volatility estimates are based on history, and particular circumstances may make the history more or less representative in one market versus the other.
- The comparison is focused primarily on normal trading conditions. To measure extreme events, the manager should consider additional information or alternative approaches. For example, the manager might want to extrapolate from knowledge of and experience in the U.S. market, or rely on other more detailed analysis of the French equity market.

TABLE 9.7 Volatility for Government Bond and CAC Equity Index Futures

                   Stand-Alone   Actual Portfolio   Sum of Stand-Alone
                   Volatility    Volatility         Volatility
UST 10-year bond   $130,800
CAC equity         $230,800
UST + CAC                        $291,300           $361,600

Based on Table 5.2 from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

The idea of using summary risk measures as tools for comparison across disparate trades is straightforward but quite powerful. The example of comparing a single 10-year U.S. trade versus a single equity trade is simple but it captures the essence of the approach. Realistic portfolios will consist of many trades. Such complexity would make it difficult for even an experienced manager to grasp the portfolio risk based on intuition alone. Using volatility or VaR to compare trades is particularly compelling in this example because the products are so different—different asset class, different currency, one a cash bond and the other a derivative. Most managers will not have the intimate familiarity with such a variety of products that they can dispense with these quantitative tools. When introducing new products or new sectors, with risk where the manager has little familiarity, using tools such as volatility or VaR for comparison becomes even more valuable.

9.4 TAIL RISK OR EXTREME EVENTS

The second important use of summary risk measures is in evaluating tail risk or extreme events. VaR and expected shortfall are specifically intended to capture the tail of the P&L distribution.

We might use the 1%/99% VaR to get an idea of what a large P&L might be. The 1%/99% VaR for the U.S. bond is −$304,200, which means we have roughly a 1 percent chance of seeing a loss worse than $304,200. In a period of 100 trading days, we should expect to see a loss worse than $304,200. This is not a worst case, merely a regularly occurring nasty event with which one should be comfortable.

Most usefully, the 1%/99% VaR gives an order of magnitude to the P&L. One should be very surprised to see a loss worse than $3,042,000 (10 times the VaR estimate), and equally surprised if there were no losses worse than $30,420 during a period of 100 days.

But we should not rely on the figure $304,200 absolutely—there are many sources of uncertainty and error in the estimate of $304,200. We must use the VaR with caution. In particular, the further we move out in the tail, the more difficult it is to estimate anything with confidence. We usually have two alternatives, both of which give imprecise estimates, although for somewhat different reasons:

1. Use all observations to estimate the P&L distribution. We will have a large number of observations, lowering the statistical error. Unfortunately, the estimated distribution may conform to the central part of the distribution (with the bulk of the observations) at the cost of poor fit in the tails.

2. Use tail observations to fit the tail. But then, we have only a handful of observations, and the statistical error will be high.

For parametric estimation, we have taken the first course and assumed that the distribution is normal. The tails do fit less well than the central part of the distribution. We could instead assume that the distribution was a Student-t rather than normal. This would give a 1%/99% VaR equal to −$335,600 instead of −$304,200.

Alternatively, we could take the second course and use tail observations. With only 272 observations, however, the bottom 1 percent consists of only two or three observations—hardly enough to say anything with confidence.

It is often difficult to measure the tails with confidence except by incorporating external information. Such information might be the general shape of tails taken on by financial returns, inferred from past studies or other markets. Such issues are discussed in somewhat more depth next.

Simple Parametric Assumptions—Student-t and Mixture of Normals

The first and simplest approach, discussed in Section 8.4, is to replace the assumption of normality in estimating parametric or Monte Carlo VaR with the assumption of a Student-t or a mixture of normals. Both a Student t distribution and a mixture of normals have fat tails relative to the normal but are still relatively simple.

The three distributions (normal, t distribution, and mixture of normals) are characterized by the parameters shown in Table 9.8 (assuming mean zero for each).

A simple approach for fitting the t distribution and mixture distributions is as follows:

- Choose a value for the nonscale parameters.
  - For the t distribution, degrees-of-freedom values of 3 to 6 seem to be reasonable (cf. Jorion, 2007, 130).
  - For a two-point mixture of normals, a high-volatility probability (a) of around 1 percent to 5 percent and a high-volatility magnitude (b) of around 3 to 5 seem reasonable.
- Conditional on these parameters, calculate the scale parameter by equating the sample (observed) variance and the distribution variance, as in the sketch below.
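A minimal sketch of this moment-matching step and of the mixture VaR equation from Table 9.8, using the example values ($130,800 observed volatility, ν = 6, a = 0.01, b = 5); the root-finder is one of several ways to invert the mixture CDF.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

s_obs = 130_800                   # observed P&L standard deviation

# Student t: s_obs^2 = c_t^2 * nu / (nu - 2)
nu = 6
c_t = s_obs / np.sqrt(nu / (nu - 2))           # ≈ 106,800

# Mixture: s_obs^2 = s_mix^2 * [(1 - a) + a * b^2]
a, b = 0.01, 5
s_mix = s_obs / np.sqrt((1 - a) + a * b**2)    # ≈ 117,460

# Mixture VaR: solve (1-a)*Phi(Y/s_mix) + a*Phi(Y/(b*s_mix)) = Z for Y
def mix_cdf(y):
    return (1 - a) * norm.cdf(y / s_mix) + a * norm.cdf(y / (b * s_mix))

Z = 0.001
var_mix = -brentq(lambda y: mix_cdf(y) - Z, -10 * s_obs, 0.0)
# 0.1%/99.9% VaR ≈ $753,000 (cf. Table 9.10)
```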


Remember, as discussed in Chapter 8, that the t distribution does not add as the normal distribution does: the sum of two t distributions is not t distributed. This makes the t distribution less useful in a portfolio context. For the normal distribution, we can assume that the individual risk factors are normally distributed and the portfolio P&L (the sum of individual asset P&Ls) will also be normal. We can calculate the portfolio variance from the individual risk factor variance-covariance matrix by a straightforward matrix multiplication. This mathematical simplicity will carry over to a mixture of normals but not to the Student t distribution.

Single Asset   For the U.S. Treasury bond considered earlier, the observed standard deviation was $130,800. Assuming the t distribution degrees of freedom is ν = 6 and that the mixture high-volatility probability and magnitude are a = 0.01 and b = 5, this gives values for the parameters for a normal, t distribution, and mixture of normals as shown in Table 9.9.

TABLE 9.8 Parameters for Normal, Student t Distribution, and Mixture of Normals

Normal:
- Distribution parameters: σ – scale (standard deviation)
- Observed distribution variance: σ²_obs = σ²
- Value at Risk (VaR): Y s.t. P[Standard Normal ≤ (Y − μ)/σ] = Z

t Distribution:
- Distribution parameters: c_t – scale; ν – shape or degrees of freedom
- Observed distribution variance: σ²_obs = c_t² ν/(ν − 2)
- Value at Risk (VaR): Y s.t. P[standard t-variate ≤ (Y − μ)/c_t] = Z

Mixture of Normals:
- Distribution parameters: σ_mix – scale (standard deviation of low-vol regime); a – high-vol regime probability; b – high-vol regime multiplier
- Observed distribution variance: σ²_obs = σ²_mix[(1 − a) + ab²]
- Value at Risk (VaR): Y s.t. (1 − a)·P[Standard Normal ≤ (Y − μ)/σ_mix] + a·P[Standard Normal ≤ ((Y − μ)/σ_mix)/b] = Z

TABLE 9.9 Values for Parameters for Normal, t Distribution, and Mixture of Normals

Normal: assumed parameters μ = 0; distribution variance σ² = 130,800² ⇒ σ = 130,800
Student t: assumed parameters μ = 0, ν = 6; distribution variance c_t² × 6/(6 − 2) = 130,800² ⇒ c_t = 106,798
Mixture of Normals: assumed parameters μ = 0, a = 0.01, b = 5; distribution variance σ²_mix[(1 − a) + ab²] = 130,800² ⇒ σ_mix = 117,462


These parameters produce the densities shown in Figure 9.8. The densities do not look dramatically different and they are not very different in the central part of the distribution. In the tails, however, the t distribution and the mixture of normals diverge substantially from the normal.

Table 9.10 (reproducing some of Table 8.4) shows the VaR levels (in dollar terms and as multiples of the volatility or standard deviation). The tails of the Student t and the mixture of normals are fat relative to the normal. The difference is not substantial at the 5 percent or 1 percent level, but for the 0.1%/99.9% VaR, the Student t value of $556,100 is 1.4 times larger than the normal, while for the mixture of normals, it is 1.9 times larger. This compares reasonably with Litterman's rule of thumb, which is that for a probability of 0.39 percent, the actual VaR is 1.5 times larger than predicted by normality.

FIGURE 9.8 Densities for Normal, Student t, and Mixture of Normals
Note: The t-distribution has degrees of freedom ν = 6 and the mixture of normals has a = 1%, b = 5.

TABLE 9.10 VaR for Normal, Student t, and Mixture of Normals

Z      Y_norm     no. SD   Y_t        no. SD   Y_mix      no. SD
5.0%   −215,100   1.64     −207,500   1.59     −196,900   1.51
1.0%   −304,200   2.33     −335,600   2.57     −288,800   2.21
0.1%   −404,100   3.09     −556,100   4.25     −753,300   5.76

Note: The volatility (standard deviation) is $130,800. The Student-t distribution has 6 degrees of freedom, and the mixture of normals has a = 1%, b = 5.


Multiple Assets   The Student t distribution does not work well with multiple assets because the sum of t variates will generally not be t distributed. We cannot, as a result, apply the simple parametric approach using the t distribution, although we could still consider using it for Monte Carlo.

The mixture of normals, however, does work well with multiple assets and the parametric approach, since jointly normal variates sum to a normal variate. We can demonstrate this by using our example of the U.S. Treasury bond and the CAC futures. We must assume that the low volatility and high volatility regimes occur simultaneously for all assets; that is, both the U.S. Treasury and the CAC index are in the low volatility regime, or they are both in the high volatility regime. This is a reasonable assumption, since extreme events and crises in the financial markets tend to affect all markets together, not each market separately and independently.

The assumption that all assets are in the same regime means that, conditional on the regime, the P&L distributions are jointly normal. This makes the mathematics simple since the sum of normals itself is normal. For the example of the U.S. Treasury and the CAC, we assume that each is a mixture of normals, with a = 0.01 and b = 5. But there is an important point here. The assumption that all assets are in the same regime means that a is the same for all assets. But there is no necessity that b, the ratio of the high-to-low volatility regime, be the same for all assets. Furthermore, the correlation between assets need not be the same in the low and high volatility regimes. In a multivariate context, this means that we can allow the variance-covariance matrix to differ between the low and the high volatility regimes. What is important is that assets are jointly normal in each regime with some given variance-covariance matrix, and that all assets are in either the low or the high volatility regime simultaneously.

Table 9.11 shows how the distributions combine for our simple two-asset portfolio. Across a row (that is, in a particular regime) the P&L for multiple assets combine as a multivariate normal. In other words, the combined P&L will be normal with volatilities (standard deviations) combining according to the standard rule, as, for example, in the low volatility regime:

$$\sigma_{lo} = \sqrt{\sigma_{mixT,l}^2 + 2\rho_l\,\sigma_{mixT,l}\,\sigma_{mixC,l} + \sigma_{mixC,l}^2}$$

Down a single column, the P&L distribution for an asset (or the overall portfolio) is a two-point mixture of normals.

The individual asset distributions will be mixtures of normals, and will have fat tails relative to a normal distribution. This carries over to the overall portfolio, which is also a two-point mixture of normals. When the correlation is the same across regimes and the ratio of high-to-low volatility regimes is the same for all assets (b_T = b_C = b), then the portfolio P&L mixture distribution has the same parameters as the individual asset distributions (specifically the same b). In the more general case, however, the ratio of high-to-low volatility for the portfolio (σ_hi/σ_lo) will be some complicated function. Nonetheless, the calculation of this ratio and thus the P&L distribution is straightforward. It will require two separate portfolio calculations by the parametric approach, separately for the low and the high volatility regimes, but each of these is computationally simple.

Table 9.12 shows that the 0.1%/99.9% VaR is substantially higher than for the normal distribution with the same volatility.

TABLE 9.11 Details for Portfolio Volatility Calculation Assuming Mixture of Normals

Low-vol regime: US T-bond σ_mixT,l = 117,462; CAC futures σ_mixC,l = 207,265; correlation ρ_l = 0.24; portfolio σ_lo = √[σ²_mixT,l + 2ρ_l σ_mixT,l σ_mixC,l + σ²_mixC,l].
High-vol regime: US T-bond σ_mixT,h = b_T × σ_mixT,l = 5 × 117,462; CAC futures σ_mixC,h = b_C × σ_mixC,l = 5 × 207,265; correlation ρ_h = 0.24; portfolio σ_hi = √[σ²_mixT,h + 2ρ_h σ_mixT,h σ_mixC,h + σ²_mixC,h].
Overall: US T-bond σ_mixT,l √[(1 − a) + ab²_T]; CAC futures σ_mixC,l √[(1 − a) + ab²_C]; portfolio √[(1 − a)σ²_lo + aσ²_hi].

Note: This table lays out the calculations for a two-asset portfolio in which the correlation may differ across low and high volatility regimes, and in which assets may have different ratios of high-to-low volatility. For our particular example, we assume the same correlations and ratios: correlation ρ = 0.24, ratio b = 5.

TABLE 9.12 Results for Portfolio Volatility Assuming Mixture of Normals

                            UST 10-year Bond   CAC Futures   Portfolio    Sum of Stand-Alone
Low-vol regime (σ_mix)      $117,400           $207,300      $261,600     $324,700
High-vol regime (b·σ_mix)   $587,100           $1,036,000    $1,308,000   $1,624,000
Asset Volatility            $130,800           $230,800      $291,300     $361,600
0.1% VaR—Normal             $404,100           $713,300      $900,200     $1,117,000
0.1% VaR—Mixture            $752,400           $1,328,000    $1,676,000   $2,081,000

Note: This table assumes that a (probability of high-volatility regime) is 1 percent and b (the ratio of high-to-low volatility) is 5.

The two-point mixture of normals is simple but it does capture fat tails, the most important feature that is missed when assuming normality. Since it is based on normality, it is suitable for extending the parametric estimation methodology to capture some aspects of fat tails. We should not underrate the value of an approach that can build on the simplicity and computational speed provided by the parametric approach while also modeling fat tails.

As pointed out earlier, we can allow in the multivariate context for a dependence structure that differs between the high and low volatility regimes. For a standard multivariate normal, the dependence does not vary with the size of the P&L, but experience indicates that correlations increase under extreme conditions: the rule of thumb is that in a crisis, correlations move to one (plus or minus, against the portfolio). The correlation in the high volatility regime can be chosen closer to one (larger in absolute value). This will produce greater dependence (higher correlation) in the tails than in the central part of the distribution, but do so in a computationally simple manner.
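A sketch of the Table 9.11 calculation in code, using the example's parameters; the regime correlations are kept equal here but could be set differently, as the text notes.

```python
import numpy as np

a = 0.01                               # probability of the high-vol regime
s_lo = np.array([117_462, 207_265])    # low-vol regime vols (UST, CAC)
b = np.array([5.0, 5.0])               # high/low vol ratios, per asset
rho_lo, rho_hi = 0.24, 0.24            # regime correlations (may differ)

def two_asset_vol(s, rho):
    return np.sqrt(s[0]**2 + 2 * rho * s[0] * s[1] + s[1]**2)

sig_lo = two_asset_vol(s_lo, rho_lo)        # ≈ 261,600
sig_hi = two_asset_vol(s_lo * b, rho_hi)    # ≈ 1,308,000
overall = np.sqrt((1 - a) * sig_lo**2 + a * sig_hi**2)   # ≈ 291,300
```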

Extreme Value Theory—Fitting Tails

We discussed extreme value theory (EVT) in Section 8.4. With EVT, rather than choosing a distribution with appropriate tail behavior and then fitting the entire distribution, we fit only the tails of the distribution, the maximum (or minimum) values. The generalized extreme value (GEV) distribution provides a limiting distribution for maximums and minimums. Whatever the form of the P&L distribution (under mild regularity conditions and suitably normalized), the distribution of the maximums converges asymptotically to the GEV.

In practice, it is generally better to use the generalized Pareto distribution (GPD) and threshold exceedances rather than the maximums. Threshold exceedances are values that exceed some chosen high threshold. Remember from Section 8.4 that the excess distribution function gives the probability conditional on the random variable (the P&L) exceeding some specified level u.

Let X be the variable representing the random P&L. We focus on the exceedance X − u, the amount by which X exceeds the level u, and on the size of the exceedance y (which will be non-negative). The probability that the exceedance X − u is less than an amount y (conditional on X exceeding u) is the excess distribution:

$$\text{excess distribution: } F_u(y) = P[X - u \le y \mid X > u] = \frac{F(y + u) - F(u)}{1 - F(u)}$$

The GPD is given by:

$$G_{\xi,\beta}(y) = \begin{cases} 1 - (1 + \xi y/\beta)^{-1/\xi} & \xi \ne 0 \\ 1 - \exp(-y/\beta) & \xi = 0 \end{cases}$$

where β > 0, and y ≥ 0 for ξ ≥ 0, while 0 ≤ y ≤ −β/ξ for ξ < 0.


The GPD is useful for modeling the excess distribution function because the excess distribution is simple for the GPD: the excess distribution for a GPD is also GPD:

$$\text{excess distribution (over } u \text{) for } G_{\xi,\beta}: \quad F_u(y) = G_{\xi,\beta(u)}(y), \qquad \beta(u) = \beta + \xi u$$

We assume that if we choose a high but finite threshold u, the observed excess distribution function will actually be GPD.

We can illustrate the process using the U.S. Treasury bond we have been considering. We have 272 observations and we will choose the exceedance level u to be 18bp (approximately −$329,256), a level that includes five observations, or less than 2 percent of the observations.7 Maximum likelihood estimation is straightforward (assuming independence). The log-likelihood function is simple:

$$\ln L(\xi, \beta) = -N \ln\beta - (1 + 1/\xi) \sum_j \ln(1 + \xi\,Y_j/\beta)$$

$$Y_j = X_j - u = \text{excess loss (changing sign, so treating the loss as a positive)}$$

The five largest yield changes are shown in Table 9.13, together with the contribution to the likelihood (at the optimum).

Maximizing the likelihood function gives, approximately,

$$\beta = 4.0, \qquad \xi = 0.14$$

TABLE 9.13 Five Lowest Observations for Fitting GPD Parameters (Using Yield Changes)

Date       Yield   Change from Prior Day (bp)   Obs. No.   Approx. P&L   Log Likelihood   Excess Distribution
03/24/08   3.52    19                           5          −347,548      −1.6664          0.218
09/30/08   3.83    20                           4          −365,840      −1.9372          0.383
10/08/08   3.71    20                           3          −365,840      −1.9372          0.383
01/24/08   3.64    21                           2          −384,132      −2.1993          0.510
09/19/08   3.77    33                           1          −603,636      −4.8225          0.951

Note: Uses yield changes, and parameters β = 4.0, ξ = 0.14.

7This is just for illustrative purposes. In practice, one would need to use a much larger sample size. See McNeil, Frey, and Embrechts (2005, 278 ff) for discussion of estimation and the likelihood function.


so that the excess loss distribution, the distribution of yield changes conditional on the change being larger than 18bp, is:

$$F_u(y) = G_{\xi,\beta}(y) = 1 - (1 + \xi y/\beta)^{-1/\xi} = G_{0.14,\,4}(y) = 1 - (1 + 0.14\,y/4)^{-1/0.14} = 1 - (1 + 0.14(x - 18)/4)^{-1/0.14}$$

(Remember that y is the size by which the yield change exceeds the threshold, 18bp, and x is the size of the yield change. Table 9.13 shows the variable x.)
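A hedged sketch of the maximum likelihood fit, feeding the five tail yield changes from Table 9.13 into the log-likelihood above; as the footnote stresses, five observations are far too few for serious estimation, so this shows only the mechanics.

```python
import numpy as np
from scipy.optimize import minimize

u = 18.0                                      # threshold, bp
x = np.array([19.0, 20.0, 20.0, 21.0, 33.0])  # tail yield changes (Table 9.13)
y = x - u                                     # exceedances over the threshold

def neg_log_lik(params):
    xi, beta = params
    z = 1 + xi * y / beta
    if beta <= 0 or np.any(z <= 0):
        return np.inf                         # outside the GPD support
    return len(y) * np.log(beta) + (1 + 1 / xi) * np.log(z).sum()

res = minimize(neg_log_lik, x0=[0.2, 3.0], method="Nelder-Mead")
xi_hat, beta_hat = res.x                      # the text reports roughly (0.14, 4.0)
```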

As the value of the shape parameter, ξ, tends to zero, the tails become less fat (zero corresponds to exponential, as for a normal distribution), while for larger ξ, the tails become increasingly fat: for ξ = 0.5, the variance is not finite, and for ξ = 1, the mean is not finite. The value of ξ = 0.14 estimated here should not be taken seriously given the very small number of observations. It is somewhat low relative to many studies of financial returns, which find values in the range of 0.2 to 0.4 for stock market data (see, for example, McNeil, Frey, and Embrechts 2005, 280 ff; Jorion 2007, 130).

Figure 9.9 shows the estimated GPD excess loss distribution, together with that for a normal, mixture of normals (a = 1%, b = 5), and Student t (degrees-of-freedom 6). The normal tails off very fast (meaning low probability of large losses) while the other distributions show larger probability of large losses.

FIGURE 9.9 Excess Loss Distribution for Fitted GPD and Other Distributions
(The figure plots the empirical, GPD, Student t, mixture, and normal excess loss distributions against yield changes.)


Using the definition of the excess distribution and the assumption that the excess loss distribution is GPD (and noting that here the variable x measures the loss level, while the preceding y measures the exceedance), one can show that for any level of loss beyond u:8

$$1 - F(x) = [1 - F(u)][1 - F_u(x - u)] = (1 - F(u))\left(1 + \xi\,\frac{x - u}{\beta}\right)^{-1/\xi}$$

The VaR (or any other risk measure) cannot be calculated from the fitted excess loss distribution alone, since the probability F(u) is required. The fitted GPD distribution can be combined with the empirical estimator of the threshold probability:

$$1 - F(u) = \frac{\text{number of observations above } u}{\text{total number of observations}}$$

Calculating the VaR for the chosen level u itself provides no benefit, as it just reproduces the empirical quantile. The benefit of the GPD or tail exceedance approach is in extrapolation to more extreme tail probabilities based on the fitted GPD; the fitted GPD should be a better (and smoother) fit to the tail data than the empirical distribution.

For VaR levels beyond u (1 − Z ≥ F(u)):9

$$\mathrm{VaR}_Z = Z\text{-quantile} = u + \frac{\beta}{\xi}\left\{\left[\frac{Z}{1 - F(u)}\right]^{-\xi} - 1\right\}$$

and the expected shortfall (assuming ξ < 1) is:

$$\mathrm{ES}_Z = \frac{\mathrm{VaR}_Z}{1 - \xi} + \frac{\beta - \xi u}{1 - \xi}$$

Tables 9.14 and 9.15 show the estimated VaR and expected shortfall for the four functional forms (normal, GPD, Student t with 6 degrees of freedom, and mixture of normals with a = 1% and b = 5) for more extreme levels of Z: the 1%/99% VaR and the 0.1%/99.9% VaR (see Section 8.4).

8See McNeil, Frey, and Embrechts (2005, 283) for derivation of this and the following formulae.
9Note that my Z = 1 percent (to denote the 1%/99% VaR) corresponds to α = 99 percent for McNeil, Frey, and Embrechts.

As expected, for the more extreme tail probability (0.1%/99.9%) the thinner tail of the normal relative to the other distributions generates lower VaR and expected shortfall.
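To make the extrapolation concrete, here is a minimal Python sketch of the two formulas above, using the fitted parameters ξ = 0.14 and β = 4.0 and the 18bp threshold from the text. The threshold probability 1 − F(u) depends on the sample; the 0.018 used here is merely illustrative, chosen to be roughly consistent with the GPD row of Table 9.14.

```python
def gpd_var_es(z, u, xi, beta, p_u):
    """Tail-extrapolated VaR and expected shortfall from a fitted GPD.

    z    : tail probability (e.g., 0.01 for the 1%/99% VaR)
    u    : threshold (here in bp of yield change)
    xi   : fitted GPD shape parameter
    beta : fitted GPD scale parameter
    p_u  : empirical threshold probability 1 - F(u),
           (# observations above u) / (total # observations)
    """
    var = u + (beta / xi) * ((z / p_u) ** (-xi) - 1.0)
    es = var / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)  # requires xi < 1
    return var, es

# u = 18bp, xi = 0.14, beta = 4.0 from the text; p_u = 0.018 is illustrative
for z in (0.01, 0.001):
    var, es = gpd_var_es(z, u=18.0, xi=0.14, beta=4.0, p_u=0.018)
    print(f"Z = {z:.3f}: VaR = {var:.1f}bp, ES = {es:.1f}bp")
```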

8 See McNeil, Frey, and Embrechts (2005, 283) for derivation of this and the following formulae.
9 Note that my Z = 1 percent (to denote the 1%/99% VaR) corresponds to α = 99 percent for McNeil, Frey, and Embrechts.

Copulas—Non-Normal Multivariate Distributions

We have used the two-point mixture as a simple model for a non-normal multivariate distribution. Copulas provide the mathematical structure to do this more generally, although we will see that the two-point mixture is still


quite useful. Copulas are most suitable for Monte Carlo applications since the distributions generated with copulas are often easy to simulate but have no simple closed-form or analytic expressions.

The essence of the copula approach is that it allows one to specify the marginal distributions and the dependence structure (the copula) separately, then combine the two to produce a multivariate distribution. Alternate marginals and copulas can be mixed and matched to produce both standard distributions (for example, multivariate normal) and hybrid or what McNeil, Frey, and Embrechts (2005, 192) call meta distributions.

Here we will use the copula approach and Monte Carlo to calculate the VaR for our example portfolio of $20 million of the 10-year U.S. Treasury bond and €7 million of the CAC index futures. We will make five distributional assumptions for the risk factors (yields and equity index):

1. Bivariate Normal—Normal marginals and normal copula—neither fat tails nor high probability of joint extreme events.

2. Hybrid Student/Normal—Student t distributed marginals (3 degrees of freedom) and normal copula—produces fat tails, but the normal copula means variables start to behave independently for joint extreme events.

TABLE 9.15 VaR and Expected Shortfall (Dollars) for Alternative Functional Forms

                        1%/99% Level            0.1%/99.9% Level
                        VaR       Exp Short     VaR        Exp Short
Normal                  365,576   419,866       487,958    532,314
GPD                     375,755   468,403       592,242    720,132
Student t               403,967   520,371       674,087    832,127
Mixture of Normals      346,696   834,955       915,682    1,638,737

TABLE 9.14 VaR and Expected Shortfall (Yield, in bp) for Alternative Functional Forms

                        1%/99% Level            0.1%/99.9% Level
                        VaR       Exp Short     VaR        Exp Short
Normal                  20.0      23.0          26.7       29.1
GPD                     20.5      25.6          32.4       39.4
Student t               22.1      28.4          36.9       45.5
Mixture of Normals      19.0      45.6          50.1       89.6


3. Usual Bivariate Student t—Student t distributed marginals and usual Student t copula (again 3 degrees of freedom)—produces fatter tails, and the Student t copula produces increased probability of joint extreme events.

4. Alternate (Product) Bivariate Student t—Student t distributed marginals and a Student t copula that reduces to a product distribution (independence) for correlation zero. This distribution also behaves independently for joint extreme events.

5. Two-Point Mixture of Normals (a = 2%, b = 4)—Looks similar to the bivariate Student t distribution, with many joint extreme observations.

The difference between the usual bivariate t distribution and the product bivariate Student t requires comment and is elaborated upon shortly.

First, however, it is worth taking a moment to review exactly how and why a multivariate distribution is used in calculating the portfolio P&L distribution. The portfolio distribution is usually of primary interest and the individual asset distributions only intermediate steps. The only way to obtain the portfolio P&L distribution, however, is to build up from the individual assets (as discussed in Sections 8.3 and 9.2, and outlined in Figure 8.7). The four steps that produce the P&L distribution are:

1. Asset to Risk Factor Mapping—Calculate the transformation from individual assets to risk factors

2. Risk Factor Distributions—Estimate the range of possible levels and changes in market risk factors

3. Generate P&L Distribution—Generate risk factor P&L and sum to produce the portfolio P&L distribution

4. Calculate Risk Measures—Estimate the VaR, volatility, or other desired characteristics of the P&L distribution

Joint normality is popular because if the mapping in Step 1 is linear and the risk factor distributions in Step 2 are multivariate normal, then the summation in Step 3 can be done analytically—it requires only a matrix multiplication to calculate the portfolio volatility. In such a case, the individual risk factor P&Ls will be normal, the sum of normals is normal, and the resulting portfolio P&L will be normal. This reduces mathematical and computational complexity enormously.

If the risk factors are non-normal or the P&L functions are far from linear, the summation in Step 3 becomes laborious and the P&L distribution will not be a simple form. In such cases the overall portfolio distribution has to be estimated using Monte Carlo, a long and computationally intensive process.


When the distribution is not jointly normal (for the hybrid Student/normal, the usual bivariate Student t, and the alternate Student t) the portfolio distribution will not be Student t or normal even if we assume the transformation in Step 1 is linear, since the sum of risk factor P&Ls or convolution of the distributions will give something new and analytically intractable.10 Monte Carlo will be the only feasible computational approach.

I briefly review the computational approach for simulating the meta distributions we use here, although for a more detailed discussion the reader should go to McNeil, Frey, and Embrechts (2005, 193, 66, and 76). For the simulation of Student t marginals combined with a normal copula, the two-step process for each draw of our bivariate random variable is:

Step 1. Generate a normal copula:
• For each draw, generate a standardized bivariate normal with mean zero, unit variances, and the desired correlation matrix:
$$Y = (Y_1, Y_2)' \sim N_2(0, R)$$
• Calculate
$$U = (U_1, U_2)' = (\Phi(Y_1), \Phi(Y_2))$$
where Φ(·) is the normal CDF.
• This will be a normal copula. More precisely, the random vector U will have a normal copula distribution with correlation matrix R.
• The numbers Uᵢ will be in the interval [0,1] and will look like probabilities.

Step 2. Generate the joint distribution with the desired marginals:
• Calculate
$$X = \left(t_\nu^{-1}(U_1),\; t_\nu^{-1}(U_2)\right)'$$
where t_ν⁻¹(·) is the univariate Student t inverse CDF or quantile function.
• This will now have marginal Student t distributions with a normal dependence structure.

10 When assuming the risk factor distributions are two-point mixtures of normals, the portfolio P&L will also be a two-point mixture and can be calculated analytically, as discussed earlier.


The process for other copulas and marginals should be obvious. For example, a Student t copula would use the t distribution CDF instead of the normal CDF in Step 1, and normal marginals would use the normal inverse CDF in Step 2.
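As an illustration of the two-step recipe, here is a short Python sketch (my own, not the book's code) that simulates the hybrid Student/normal meta distribution with scipy. The 0.5 correlation, the 3 degrees of freedom, and the dollar volatilities follow the text; the rescaling of the t marginals to unit variance is an assumption so that the dollar volatilities can be applied directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, rho, nu = 5000, 0.5, 3

# Step 1: normal copula -- correlated standard normals pushed through
# the normal CDF give uniforms with a normal dependence structure.
R = np.array([[1.0, rho], [rho, 1.0]])
Y = rng.multivariate_normal(mean=[0.0, 0.0], cov=R, size=n_sims)
U = stats.norm.cdf(Y)                     # each column uniform on [0,1]

# Step 2: desired marginals -- Student t inverse CDF applied to the
# uniforms, rescaled to unit variance (a t_nu has variance nu/(nu-2)).
X = stats.t.ppf(U, df=nu) / np.sqrt(nu / (nu - 2))

# Risk factor P&L and portfolio P&L (linear mapping, dollar vols from text)
vol_bond, vol_eq = 130_800.0, 131_900.0
pnl = vol_bond * X[:, 0] + vol_eq * X[:, 1]
print("1%/99% VaR:", -np.quantile(pnl, 0.01))
```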

To demonstrate the use of copulas in Monte Carlo estimation of the portfolio distribution, we will consider a portfolio of $20 million of the 10-year U.S. Treasury bond and €4 million of the CAC equity index futures.11

Figure 9.10 shows the bivariate distribution for our five risk factor distributions. Each point represents one realization (5,000 in total) for the joint bond P&L (on the horizontal) and the equity futures P&L (on the vertical). The dashed lines are 3-sigma bars. The volatility of the bond P&L is $130,800, and the equity P&L is $131,900.

Some of the highlights that we should note include:

• The normal-normal (jointly normal, or normal marginals and normal copula) shows virtually no extreme observations outside 3-sigma. This is as we would expect for a normal distribution, since a normal has few extreme values.

• The Student-normal (Student t marginals, 3 degrees of freedom, normal copula) shows observations outside the 3-sigma bars, as we would expect given the Student t distributed marginals. With the normal copula, however, there are virtually no joint extreme observations. With the normal dependence structure inherited from the normal copula, there are many individually extreme events but virtually no jointly extreme events. This argues against a normal copula for modeling financial markets, in spite of the familiarity we have with the simple dependence structure (linear correlation) inherent in the normal copula.

• The Student-Student (usual bivariate Student t distribution, 3 degrees of freedom, with Student t marginals and Student t copula) shows many joint extreme events. This matches what we seem to see in markets, with simultaneous large moves in both assets.

• The alternate Student (discussed more further on), like the preceding Student-normal, shows virtually no joint extreme events.

• The mixture (two-point mixture of normals with a = 2%, b = 4) shows many joint extreme events, and in this respect looks much more like the bivariate Student t than any of the other distributions.

11 This is slightly modified from the portfolio we have been using, to better highlight the differences across risk factor distributions. The amount of the CAC futures is less, to make the volatilities of the bond and futures positions more equal, and I assume that the correlation between yields and the equity index is 0.5 instead of 0.24 as in all other examples considered in this book.


FIGURE 9.10 Monte Carlo Results for Two-Asset Portfolio, Alternate Meta Distributions through Copulas
[Five scatter-plot panels: Normal-Normal, Student-Normal, Student-Student, Alternate Student, and Mixture.]


We now turn to a short digression on the Student t distribution. For the t distribution there is not a unique multivariate form, unlike the joint normal distribution. Shaw and Lee (2007) discuss the issue in some detail. The easiest way to see this, and to understand the tail dependence embedded in the usual t distribution, is to examine the construction of a multivariate Student t distribution by Monte Carlo (see McNeil, Frey, and Embrechts 2005, section 3.2, particularly 75 and 76, as well as Shaw and Lee 2007, 8 ff). The usual multivariate Student t (all marginals with the same degrees of freedom, ν) is a normal mixture distribution, generated as:

$$t \stackrel{d}{=} A \cdot Y \cdot \sqrt{\frac{\nu}{\chi^2_\nu}}$$

where Y ~ N_k(0, I_k) is a multivariate standardized normal random variable and χ²_ν is a (univariate) chi-squared random variable with ν degrees of freedom.

This construction shows immediately why a Student t variable has fat tails relative to a normal: small draws for the chi-squared variable blow up the normal variables. This produces a small chance of values that are large relative to the normal distribution. As the degrees of freedom gets large, the ν in the numerator offsets the potential small values of the chi-squared variable and the t tends to the normal. This construction also shows why this multivariate t has joint extreme values. Each and every normal is divided by the same chi-squared variable. A small draw for the χ² will inflate all instances of the normal at the same time, producing joint extreme values.

As Shaw and Lee point out, however, this is not the only possible construction for a multivariate Student t distribution. Instead of using a single χ² applied to all the dimensions of the multivariate normal, we could apply a separate χ² to each dimension. In fact, this is a convenient way to produce a multivariate Student t with marginals having different degrees of freedom. Such a construction also has the benefit that zero correlation will produce independence. For the usual multivariate t distribution, zero correlation does not imply independence. Again, the preceding construction shows why. For the usual multivariate Student t distribution (with a single χ²), the tails will be correlated because the single χ² applies to all dimensions. As we move further out in the tails, this effect dominates and even zero-correlation variables become dependent.

The ‘‘Alternate Student’’ shown in Figure 9.10 is constructed using a separate χ² for each of the yield and equity index variables. As we see in Figure 9.10, however, such a construction is probably not useful for a joint market risk factor distribution. It does produce extreme events but virtually


no joint extreme events, no events that produce large changes for both yields and equity indexes. The dependence behavior in the tails seems to be similar to that for the normal copula. We return to this alternate Student t distribution again in Chapter 11. In the application there, however, we will see that the independence of tail events embedded in the alternative Student t is more appropriate than the usual Student t.
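The two constructions are easy to compare by simulation. The following sketch (an illustration under the same parameters as the text, not the book's code) generates both the usual and the alternate bivariate Student t and counts joint 3-sigma exceedances:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, nu, rho = 5000, 3, 0.5
R = np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], R, size=n_sims)

# Usual multivariate t: one chi-squared draw shared across dimensions,
# so a small draw inflates both components at once (joint tail events).
w_shared = rng.chisquare(nu, size=(n_sims, 1))
t_usual = Z * np.sqrt(nu / w_shared)

# Alternate (product) t: a separate chi-squared per dimension, so large
# moves in the two components occur (almost) independently.
w_sep = rng.chisquare(nu, size=(n_sims, 2))
t_alt = Z * np.sqrt(nu / w_sep)

for name, t in (("usual", t_usual), ("alternate", t_alt)):
    joint = np.mean(np.all(np.abs(t) > 3.0, axis=1))
    print(f"{name:9s}: fraction with both components beyond 3 = {joint:.4f}")
```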

Returning to Figure 9.10 and the exercise of comparing copulas, we can draw three conclusions. First, joint normality is problematic for modeling tail behavior. This goes beyond the well-known result that normal marginals have thin tails. There is a more subtle effect related to the dependence structure embedded in joint normality. The scatter plot for the Student-normal distribution highlights this—joint extreme events are exceedingly rare. Second, and related, we need to think carefully about joint dependence in the tails. Although this is not easy, copulas can provide some tools to aid in this. Third, the two-point mixture of normals seems to work well as a computationally simple approximation. The Student t distribution, although popular, does not have the benefit of the simple analytic tools available for the normal or mixture of normals.

For completeness, Table 9.16 shows the volatility and VaR for the bond and the overall portfolio (the equity is not displayed in the interests of clarity).

9.5 CONCLUSION

Chapter 9 has demonstrated the use of volatility and VaR for measuring risk, using our simple portfolio of a U.S. Treasury bond and CAC equity index futures. Volatility and VaR are exceedingly valuable for measuring

TABLE 9.16 Volatility and 0.1%/99.9% VaR for Bond and Overall Portfolio, Alternative Joint Distributions

                    Bond Vol    Bond VaR    Port Vol    Port VaR
Normal-Normal       132,300     406,900     229,600     705,200
Student-Normal      127,400     790,400     221,800     1,430,000
Student-Student     125,800     619,600     222,500     1,139,000
Alternate Student   125,800     619,600     212,400     1,193,000
Normal Mixture      132,500     954,200     232,300     1,763,000

Note: This is the result from Monte Carlo with 5,000 draws using the joint distributions as noted. The Student distributions all have 3 degrees of freedom. The normal mixture has a = 2%, b = 4. The assumed volatility for yields was 7.15bp per day, giving a bond volatility of $130,800; for the equity index the volatility was assumed to be 2.54 percent per day, giving an equity volatility of $131,900; the correlation between yields and the equity index was assumed to be 0.5.


risk, but they do not tell us very much about the sources of risk or the structure of the portfolio.

We now turn, in Chapter 10, to the portfolio tools that help untangle the sources of risk and provide some guidance toward managing risk. Like all these tools, however, we have to remember that the numbers produced provide only guidance. Nature can always come up with unanticipated events. We need to use these tools combined with common sense. Risk management is first and foremost the art of managing risk.

APPENDIX 9.1: PARAMETRIC ESTIMATION USING SECOND DERIVATIVES

We now go through an example of using second derivatives for parametric estimation, as laid out in the appendix of Chapter 8. As discussed there, second derivatives can be used with an asymptotic Cornish-Fisher expansion for the inverse CDF to provide a flag for when nonlinearities are large enough to make a difference in quantile (VaR) estimation.

For parametric or linear estimation (for a single risk factor, using first derivatives only and assuming μ = 0) the portfolio mean and variance are:

$$\text{1st moment: } 0, \qquad \text{2nd moment: } \delta^2\sigma^2$$

Using second derivatives, the first four central moments are:

$$\begin{aligned}
\text{1st:}\quad & \tfrac{1}{2}\gamma\sigma^2\\
\text{2nd:}\quad & \delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4
\;\Rightarrow\; \text{volatility} = \sqrt{\delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4}\\
\text{3rd:}\quad & \gamma^3\sigma^6 + 3\delta^2\gamma\sigma^4
\;\Rightarrow\; \text{skew} = \frac{\gamma^3\sigma^6 + 3\delta^2\gamma\sigma^4}{\left[\delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4\right]^{3/2}}\\
\text{4th:}\quad & 3\delta^4\sigma^4 + \tfrac{15}{4}\gamma^4\sigma^8 + 15\delta^2\gamma^2\sigma^6
\;\Rightarrow\; \text{kurtosis} = \frac{3\delta^4\sigma^4 + \tfrac{15}{4}\gamma^4\sigma^8 + 15\delta^2\gamma^2\sigma^6}{\left[\delta^2\sigma^2 + \tfrac{1}{2}\gamma^2\sigma^4\right]^2},
\quad \text{excess kurtosis} = \text{kurtosis} - 3
\end{aligned}$$

We can examine this for the $20M position in the 10-year U.S. Treasury. A bond is a reasonably linear asset, and so we should expect no nonlinear effects from including the second derivatives. The delta and gamma (first and second derivative, or DV01 and convexity) are:

δ = $18,292/bp        γ = $17.59/bp²


while the volatility of yields is 7.15bp per day. This gives the following moments:

                   Linear      Quadratic
Mean               0.0         450
Volatility         130,789     130,791
Skew               0.0         0.0206
Excess Kurtosis    0.0         0.0006

The skew and kurtosis are so small they clearly will have no discernible impact on the shape of the P&L distribution.
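The moment formulas above are straightforward to evaluate. The following Python sketch (using the bond's delta, gamma, and yield volatility from the text) reproduces the quadratic column approximately:

```python
import math

def delta_gamma_moments(delta, gamma, sigma):
    """First four delta-gamma moments for a single normal risk factor."""
    mean = 0.5 * gamma * sigma**2
    var = delta**2 * sigma**2 + 0.5 * gamma**2 * sigma**4
    m3 = gamma**3 * sigma**6 + 3 * delta**2 * gamma * sigma**4
    m4 = (3 * delta**4 * sigma**4 + (15 / 4) * gamma**4 * sigma**8
          + 15 * delta**2 * gamma**2 * sigma**6)
    return {
        "mean": mean,
        "volatility": math.sqrt(var),
        "skew": m3 / var**1.5,
        "excess_kurtosis": m4 / var**2 - 3.0,
    }

# $20M 10-year UST: delta = $18,292/bp, gamma = $17.59/bp^2, 7.15bp/day
print(delta_gamma_moments(delta=18_292.0, gamma=17.59, sigma=7.15))
# mean ~450, volatility ~130,790, skew ~0.0206, excess kurtosis ~0.0006
```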

In contrast, an option will have considerable nonlinearity (gamma, or second derivative) and thus we should expect to see considerable nonlinear effects in the P&L distribution. Consider short $20M of a six-month option on a five-year par bond or swap.12 The delta and gamma will be:

δ = $3,015.3/bp        γ = $135.9/bp²

while the volatility of yields is 7.0735bp per day. This gives the following moments:

                   Linear      Quadratic
Mean               0.0         −3,399
Volatility         21,392      21,864
Skew               0.0         −0.9178
Excess Kurtosis    0.0         1.1321

These are reasonably large. To see whether they are large enough to alter the quantiles of the P&L distribution, we can use the Cornish-Fisher expansion (from Appendix 8.2) to calculate approximate quantiles:

$$\begin{aligned}
w \approx x &+ \left[\tfrac{1}{6}\,(x^2-1)\,m_3\right]
+ \left[\tfrac{1}{24}\,(x^3-3x)\,m_4 - \tfrac{1}{36}\,(2x^3-5x)\,m_3^2\right]\\
&+ \left[\tfrac{1}{120}\,(x^4-6x^2+3)\,g_3 - \tfrac{1}{24}\,(x^4-5x^2+2)\,m_3 m_4
+ \tfrac{1}{324}\,(12x^4-53x^2+17)\,m_3^3\right]
\end{aligned}$$

12 A six-month out-of-the-money option struck at 1.68 percent with the forward rate at 1.78 percent (a put on rates, or a call on the bond, or a receiver's swaption), with the short rate at 0.50 percent and volatility 20 percent.


where y = μ + σw is the solution to the inverse CDF, F(y) = prob; that is, the approximate Cornish-Fisher critical value for a probability level prob.

x = the solution to the standard normal CDF, Φ(x) = prob; that is, the critical value for probability level prob with a standard normal distribution (note that this is the lower tail probability, so that x = −1.6449 for prob = 0.05 and x = 1.6449 for prob = 0.95).

m₃ = skew
m₄ = excess kurtosis
g₃ = k₅/σ⁵
k₅ = 5th cumulant

Table 9.17 shows the approximate quantiles calculated from the preceding expression. (The first order includes only the m₃ term in the first square brackets; the second order includes the m₄ and m₃² terms in the second square brackets but excludes all the terms in the third square bracket.) The approximate quantiles show that the nonlinearity in the option payoff substantially alters the P&L distribution relative to the normal, with the lower tail being substantially longer (the 1 percent lower quantile being below −3 versus the −2.326 normal quantile) and the upper tail shorter.

The final row shows the actual quantile. One can see from this that using only the leading terms in the Cornish-Fisher expansion does not provide a particularly good approximation for the quantiles far out in the tails. Although the second-order approximation is good for the 84.1 percent and 15.9 percent quantiles, it is increasingly poor as one moves further out to the 99 percent and 1 percent quantiles in the tails.
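For reference, here is a sketch of the first- and second-order truncations of the expansion, using the six-month option's skew and excess kurtosis from the text. Note that Table 9.17 reports distances in units of the true standard deviation, so the printed values need not match the table exactly.

```python
from scipy.stats import norm

def cornish_fisher(prob, m3, m4):
    """First- and second-order Cornish-Fisher approximate quantiles."""
    x = norm.ppf(prob)                       # standard normal quantile
    w1 = (x**2 - 1.0) * m3 / 6.0             # first-order (skew) term
    w2 = ((x**3 - 3 * x) * m4 / 24.0         # second-order terms
          - (2 * x**3 - 5 * x) * m3**2 / 36.0)
    return x + w1, x + w1 + w2

# Six-month short option: skew = -0.9178, excess kurtosis = 1.1321
for p in (0.004, 0.01, 0.05, 0.159, 0.841, 0.95, 0.99, 0.996):
    cf1, cf2 = cornish_fisher(p, m3=-0.9178, m4=1.1321)
    print(f"prob {p:5.3f}: 1st order {cf1:7.3f}, 2nd order {cf2:7.3f}")
```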

Examining Table 9.17, one might think that the first-order expansion works better than the second order in the tails, but such a conclusion is not justified. The fact is that the expansion may be nonmonotonic, particularly when the skew and kurtosis are large. Consider the same option but with only one month rather than six months until maturity. Such an option will

TABLE 9.17 Approximate Quantiles for Short Option Position

Probability               0.004    0.010    0.050    0.159    0.841   0.950   0.990   0.996

Distance from mean (number of true standard deviations)
Normal                    −2.659   −2.326   −1.645   −0.999   0.999   1.645   2.326   2.659
Cornish-Fisher 1st order  −3.652   −3.080   −2.011   −1.126   0.823   1.199   1.460   1.537
Cornish-Fisher 2nd order  −3.595   −3.029   −1.974   −1.102   0.800   1.161   1.409   1.480
True Quantiles            −3.734   −3.127   −2.011   −1.110   0.809   1.198   1.515   1.640


have higher gamma and thus more skew. Table 9.18 shows the resulting quantiles and approximations. For both the first- and second-order approximations, the approximate quantiles in the upper tail decrease rather than increase. The approximation with a finite number of terms is simply not very good. (See Chernozhukov, Fernandez-Val, and Galichon 2007 for a discussion of some methods for improving the approximation.)

Nonetheless, this approach does exactly what is intended—it provides a flag for when the distribution deviates substantially from normality due to nonlinearity in the asset payoff. The conclusion is that using the second derivatives together with the Cornish-Fisher expansion provides a flag for when nonlinearity becomes important, even though it may not itself provide an adequate approximation.

TABLE 9.18 Approximate Quantiles for Short Option Position—One-Month Option (high gamma)

Probability               0.004    0.010    0.050    0.159    0.841   0.950   0.990   0.996

Distance from mean (number of true standard deviations)
Normal                    −2.659   −2.326   −1.645   −0.999   0.999   1.645   2.326   2.659
Cornish-Fisher 1st order  −4.346   −3.573   −2.198   −1.155   0.465   0.470   0.201   −0.033
Cornish-Fisher 2nd order  −4.189   −3.384   −2.004   −1.014   0.324   0.277   0.012   −0.189
True Quantiles            −5.378   −4.263   −2.357   −1.076   0.381   0.460   0.489   0.494


CHAPTER 10

Portfolio Risk Analytics and Reporting

Managing risk requires actually making decisions to increase, decrease, or alter the profile of risk. Making such decisions requires knowing not just the level of risk (the dispersion of the P&L distribution) but also the sources of risk in the portfolio and how changes in positions are likely to alter the portfolio risk. Risk measurement, to support this, must not only measure the dispersion of P&L (the primary focus for Chapters 8 and 9), but also the sources of risk. Litterman (1996, 59) expresses this well:

Volatility and VaR characterize, in slightly different ways, the degree of dispersion in the distribution of gains and losses, and therefore are useful for monitoring risk. They do not, however, provide much guidance for risk management. To manage risk, you have to understand what the sources of risk are in the portfolio and what trades will provide effective ways to reduce risk. Thus, risk management requires additional analysis—in particular, a decomposition of risk, an ability to find potential hedges, and an ability to find simple representations for complex positions.

In this sense, risk management merges into portfolio management. The present chapter discusses some of the tools and techniques suitable for such portfolio risk analysis. I particularly focus on:

• Volatility and triangle addition as an aid for understanding how risks combine.

• Marginal contribution to risk (also known in the literature as risk contribution, VaR contribution, delta VaR, incremental VaR, or component VaR) as a tool for understanding the current risk profile of a portfolio.


• Best hedges and replicating portfolios as tools for picking hedges to alter the risk profile and for understanding the risk profile of a portfolio.

• Principal components as a data reduction and risk aggregation technique.

Many of the ideas in this chapter are based on Robert Litterman's Hot Spots and Hedges (Litterman 1996), some of which also appeared in Risk magazine in March 1997 and May 1997. The idea of contribution to risk was developed independently by Litterman and by M. B. Garman (Risk magazine 1996).

These techniques are most suitable for measuring and understanding risk under standard trading conditions (as opposed to tail events). That is, these techniques are most suitable when applied to the volatility (or the VaR for a large value of Z, such as the 16%/84% VaR). This is not a weakness—remember that risk must be managed every day and most trading days are standard conditions. Furthermore, many of the techniques are based on linear approximations (in other words, assuming that the driving risk factors are normal and the positions linear in these factors). These techniques will not capture nonlinearities. This is important to remember, as nonlinearities have become increasingly important over the past 20 years with the increasing use of options and other nonlinear instruments. But the issue of nonlinearity should not be blown out of proportion. The linear approach has great utility in situations in which it is applicable. A simple approach can provide powerful insights where it is applicable, and many, even most, portfolios are locally linear and amenable to these techniques. Again, Litterman (1996, 53) summarizes the situation well:

Many risk managers today seem to forget that the key benefit of a simple approach, such as the linear approximation implicit in traditional portfolio analysis, is the powerful insight it can provide in contexts where it is valid.

With very few exceptions, portfolios will have locally linear exposures about which the application of portfolio risk analysis tools can provide useful information.

10.1 VOLATILITY, TRIANGLE ADDITION, AND RISK REDUCTION

As a guide for understanding how the risks of assets combine to yield the total portfolio risk, volatility and linear approximations are extremely useful. Volatility may not be the perfect measure of risk, but the intuition it


builds regarding the often nonintuitive aggregation of risk is effective, even invaluable.

The volatility of a portfolio is calculated from the volatilities of two assets according to:1

$$\sigma_p^2 = \sigma_1^2 + 2\rho\sigma_1\sigma_2 + \sigma_2^2 \qquad (10.1\mathrm{a})$$

The cross term 2ρσ₁σ₂ means that the portfolio variance and volatility are not the simple addition of the position variances or volatilities. In fact, position volatilities combine like the legs of a triangle:2

$$A^2 = B^2 - 2BC\cos\theta + C^2 \qquad (10.1\mathrm{b})$$

The two expressions will be equivalent when cos θ = −ρ:

$$A^2 = B^2 - 2BC\cos\theta + C^2 \;\Longleftrightarrow\; \sigma_p^2 = \sigma_1^2 + 2\rho\sigma_1\sigma_2 + \sigma_2^2 \quad\text{for } \cos\theta = -\rho$$

Consider the portfolio of $20M U.S. Treasury bond and €7M nominal of CAC equity index futures considered in Chapter 9, shown in Table 10.1.

In Figure 10.1, Panel A shows the combination of the two volatilities as the two legs of a triangle. For the triangle, the A-leg is shorter than the sum B + C because the angle is less than 180° (the correlation is less than 1.0). In terms of volatilities, the portfolio volatility is less than the sum of the UST and CAC volatilities because the correlation is less than 1.0. If the angle were 180° (the correlation were 1.0), then the resulting leg (the portfolio volatility) would be the straightforward sum.
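A two-line computation confirms the triangle picture; this sketch uses the UST/CAC dollar volatilities from Table 10.1 and the 0.24 correlation from the text:

```python
import math

def portfolio_vol(s1, s2, rho):
    """Equation (10.1a): combined volatility of two positions."""
    return math.sqrt(s1**2 + 2 * rho * s1 * s2 + s2**2)

s_ust, s_cac, rho = 130_800.0, 230_800.0, 0.24
print("long CAC: ", round(portfolio_vol(s_ust, s_cac, rho)))    # ~291,300
print("short CAC:", round(portfolio_vol(s_ust, -s_cac, rho)))   # ~236,400
```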

Panel B shows the situation when the CAC position is reversed: −€7M nominal of CAC futures instead of +€7M. The individual volatilities are

1 This assumes the volatilities σᵢ are measured in dollar terms, as in the following example. If they are measured as volatilities per unit or in percentage returns, and the asset portfolio weights are vᵢ, the expression will be σ_p² = v₁²σ₁² + 2ρv₁σ₁v₂σ₂ + v₂²σ₂².
2 I know of this analogy between volatilities and triangles from Litterman (1996), but Litterman notes that it may have been used earlier. For those with a background in vectors and complex numbers, the volatilities add as vectors, and the vectors (volatilities) can be conveniently represented as complex numbers in polar notation (remembering that the convention for complex numbers is to measure the angle from the x-axis, so that the angle φ for the complex representation is φ = 180 − θ, or cos φ = ρ). This analogy does not extend conveniently to three volatilities because of the difficulty in defining the angles between vectors.


the same, but the portfolio volatility (the A-leg of the triangle) is now shorter—the portfolio volatility is only $236,400. Were the angle to be 0° (correlation −1.0), the length of the A-leg (the portfolio volatility) would be much reduced (in fact, only $100,025).

Correlation and Risk Reduction Potential

The triangle addition for volatilities can be used to understand the potential for risk reduction and how this varies with the correlation between assets. In Figure 10.1, the combinations considered are +$20M UST and ±€7M nominal of CAC futures. Alternatively, we could take the +$20M UST as fixed and consider the CAC as a hedge, varying the amount of the futures. We could ask by how much the UST volatility can be reduced through hedging: What is the potential for risk reduction? Precisely, we could calculate the percentage reduction in volatility that we could achieve by optimally hedging the U.S. bond with the CAC futures.

Panel B in Figure 10.1 shows +$20M in UST and −€7M nominal of CAC, with an angle of θ = 76° between them (cos 76° = 0.24 = −ρ). Hedging the UST with the CAC means keeping the amount of UST fixed (the base line B), while varying the amount of the CAC (the length of line C), with the angle between them determined by the correlation (θ = arccos(−ρ)). If we wish to minimize the resulting combined volatility (the line A), then it should be clear that A must make a right angle with C, as shown in Figure 10.2. But in that case, we have a right triangle with hypotenuse B, and A = B sin θ. The reduction in volatility is B − A and the proportional reduction, or the risk reduction potential, is (B − A)/B:

$$\text{Risk Reduction Potential} = 1 - A/B = 1 - \sin\theta = 1 - \sin\left(\arccos(-\rho)\right) \qquad (10.2\mathrm{a})$$

TABLE 10.1 Volatility for Government Bond and CAC Equity Index Futures (reproduced from Table 9.7)

                  Stand-Alone    Actual Portfolio    Sum of Stand-Alone
                  Volatility     Volatility          Volatility
UST 10-yr bond    $130,800
CAC equity        $230,800
UST + CAC                        $291,300            $361,600

Based on Table 5.2 from A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


FIGURE 10.1 Volatilities Combine as Legs of a Triangle (Vector Addition)
Panel A. Long UST and Long CAC: B = $130,800 (+$20M UST) and C = $230,825 (+€7M CAC) meet at 104°, giving portfolio volatility A = $291,300.
Panel B. Long UST and Short CAC: B = $130,800 (+$20M UST) and C = $230,825 (−€7M CAC) meet at 76°, giving portfolio volatility A = $236,400.
Reproduced from Figure 5.12 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


An alternative but longer way of deriving the same result is to use equation (10.1a). Take the amount of the UST as fixed at $20M and let the amount of the CAC be a (in units of €7M). The correlation between them is ρ (0.24 in this case). Then the volatility of the hedged position, σ_h, will be given by

$$\sigma_h^2 = \sigma_1^2 + 2\rho\sigma_1 a\sigma_2 + a^2\sigma_2^2$$

This will be minimized when a = −ρσ₁/σ₂. The hedged volatility as a proportion of the original (UST) volatility will be σ_h/σ₁ = √(1 − ρ²). This means that when the correlation between two assets is ρ, the maximum proportional reduction in volatility will be:

$$\text{Risk Reduction Potential} = 1 - \sqrt{1-\rho^2} \qquad (10.2\mathrm{b})$$

For the UST and CAC, where the correlation is 0.24, using either (10.2a) or (10.2b), the risk reduction potential is only 3 percent. This is very low, and means that using CAC futures as a hedge for the U.S. bond would be almost completely ineffective. Table 10.2 shows the risk reduction potential for various levels of correlation.
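Equation (10.2b) is simple enough to tabulate directly; the following sketch reproduces the 3 percent figure for ρ = 0.24 and the levels shown in Table 10.2:

```python
import math

def risk_reduction_potential(rho):
    """Equation (10.2b): maximum proportional volatility reduction."""
    return 1.0 - math.sqrt(1.0 - rho**2)

for rho in (0.24, 0.25, 0.50, 0.80, 0.90, 0.99):
    print(f"rho = {rho:4.2f}: potential reduction = "
          f"{risk_reduction_potential(rho):.1%}")
```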

As Litterman (1996, 62) points out, "Many traders and portfolio managers may not be aware of how sensitive risk reduction is to the degree of correlation between the returns of the positions being hedged and the hedging instruments." Litterman (1996, Exhibit 17) also has a useful diagram showing the potential risk reduction as a function of the correlation and angle.

FIGURE 10.2 Triangle Addition and Risk Reduction Potential
[B = $130,800 (+$20M UST) and C = $31,400 (−€950k CAC) meet at 76°; C is chosen so that the portfolio volatility A = $127,000 makes a right angle with C.]
Note: This shows side C (the amount of the CAC futures in this case) chosen to provide maximum risk reduction or the optimal hedge for side B (the U.S. bond in this case). Reproduced from Figure 5.13 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


The example here used two individual assets, but the result carries over when we consider a composite portfolio hedged by one single asset.

10.2 CONTRIBUTION TO RISK

Volatilities and variances do not add, and equation (10.1) does not, on the surface, provide a decomposition of portfolio volatility into contributions due to individual assets or groups of assets. Nonetheless, there are two useful ways we can define the contribution a position makes to the volatility or VaR:

1. Infinitesimal: the change in volatility or VaR due to an infinitesimal change in a position.

2. All-or-nothing: the change in volatility or VaR due to complete removal of a position.

In my view, the infinitesimal, or marginal, contribution to risk, and the decomposition it provides, is one of the most powerful but underappreciated tools for risk analysis. Such a contribution to risk provides a useful decomposition of the current risk profile by showing how the current positions affect the current portfolio, aiding in the understanding of the portfolio. Positions in a portfolio are usually adjusted little by little rather than by complete removal of a position, and the marginal contribution provides a good estimate of this for a large portfolio with many small positions. I find the infinitesimal, rather than the all-or-nothing, measure to be by far the more useful. Although the change due to complete removal of an asset (setting to a zero position) is valuable information, I think the best hedges analysis, discussed further on, is generally more useful.

TABLE 10.2 Risk Reduction Potential for Various Levels of Correlation

Correlation   Angle θ   Risk Reduction Potential
−0.99         8.1°      85.9%
−0.90         25.8°     56.4%
−0.80         36.9°     40.0%
−0.50         60.0°     13.4%
−0.25         75.5°     3.2%

Note: This shows the proportional reduction in volatility, [Vol(no hedge) − Vol(hedge)]/Vol(no hedge), that is possible with various values for the correlation between the original asset and the hedging asset. Reproduced from Table 5.3 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


Unfortunately, there is no agreement in the literature, and considerable confusion, regarding nomenclature, and this provides a barrier to better understanding of contribution to risk. Particularly confusing is that RiskMetrics uses the word marginal for the all-or-nothing measure (even though the word marginal is commonly used to denote small changes at the margin and not large, finite changes) and uses the word incremental for the infinitesimal measure (again somewhat contrary to common usage of the word incremental). Most of the literature uses the reverse terminology. Nor are texts always clear in their explication of the concept.

Table 10.3 is a quick guide to the various terms used by different writers.

Marginal Contribution to Risk

The idea of marginal contribution to risk was introduced independently by Robert Litterman in Hot Spots and Hedges (Litterman 1996) and by M. B. Garman (Risk magazine 1996). We start by considering the marginal contribution to volatility. It will be shown, however, that the concept of contribution to risk is also applicable to most commonly used risk measures (e.g., VaR and expected shortfall, but not probability of shortfall).

TABLE 10.3 Terms Used in the Literature for the Infinitesimal and All-or-Nothing Decompositions of Risk

                                   Infinitesimal                     All-or-Nothing
This monograph                     marginal contribution or          all-or-nothing
                                   contribution to risk              contribution to risk
Litterman (1996)                   contribution to risk
Crouhy, Galai, and Mark (2001)     delta VaR                         incremental VaR
Marrison (2002)                    VaR contribution
Mina and Xiao/RiskMetrics (2001)   incremental VaR                   marginal VaR
Jorion (2007)3                     marginal VaR and component VaR    incremental VaR

Reproduced from Exhibit 5.2 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

3 Jorion's explication of these ideas is unfortunately somewhat confusing—his marginal VaR is not additive, and he fails to point out the marginal nature of his component VaR (my marginal contribution). See the discussion in a subsequent footnote.


To start, consider equation (10.1a) for the variance of two assets, but now include explicitly the weights of the asset holdings, so that σᵢ is the volatility for a unit holding of the position and vᵢ is the amount of the holding (measured in dollars, number of bonds, percent of the portfolio, or whatever the appropriate unit is). This can be written as:

$$\sigma_p^2 = v_1^2\sigma_{11} + v_1 v_2\sigma_{12} + v_1 v_2\sigma_{12} + v_2^2\sigma_{22}$$

and the volatility as:

$$\sigma_p = \frac{v_1^2\sigma_{11} + v_1 v_2\sigma_{12} + v_1 v_2\sigma_{12} + v_2^2\sigma_{22}}{\sigma_p}$$

This suggests a simple ad hoc decomposition of the volatility into constituent parts;4 the term

$$MCL_1 = \frac{v_1^2\sigma_{11} + v_1 v_2\sigma_{12}}{\sigma_p}$$

being defined as that portion attributable to asset 1, while a similar term gives the contribution for asset 2, so that:

$$\sigma_p = \frac{v_1^2\sigma_{11} + v_1 v_2\sigma_{12}}{\sigma_p} + \frac{v_1 v_2\sigma_{12} + v_2^2\sigma_{22}}{\sigma_p} = MCL_1 + MCL_2 \qquad (10.3\mathrm{a})$$

That is, the volatility can be decomposed into additive contributions from the two assets. So far, this is just an ad hoc decomposition of the volatility. The wonderful thing is that we arrive at exactly the same additive decomposition if we consider an infinitesimal or marginal change in the volatility resulting from infinitesimal changes in the asset holdings or weights. First, we rewrite the expression for volatility (using the vector v′ = [v₁, . . . , vₙ] and the variance-covariance matrix Σ) as:

$$\sigma_p = \sqrt{v_1^2\sigma_{11} + v_1v_2\sigma_{12} + v_2v_1\sigma_{21} + v_2^2\sigma_{22}} = \sqrt{v'\Sigma v} = \frac{v'\Sigma v}{\sigma_p} \qquad (10.3\mathrm{b})$$

4 Marrison (2002, ch. 7) has a nice explication of the marginal contribution (Marrison calls it VaR contribution), with clear formulae in both summation and matrix notation. Unfortunately, Marrison does not point out the marginal nature of the measure discussed next (that it gives the infinitesimal change in volatility for an infinitesimal percent change in asset holding), but otherwise the discussion is very useful.


It will be convenient to write this in a way that keeps track of the individual components vᵢ. I will use the notation [v′Σ]ᵢ to denote the ith element of the row vector v′Σ, and [Σv]ᵢ to denote the ith element of the column vector Σv. That is,

$$v'\Sigma = \begin{bmatrix} v_1 & \cdots & v_n \end{bmatrix}
\begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1n}\\ & \ddots & \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{bmatrix}
= \begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}, \qquad [v'\Sigma]_i = a_i$$

$$\Sigma v = \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1n}\\ & \ddots & \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{bmatrix}
\begin{bmatrix} v_1\\ \vdots\\ v_n \end{bmatrix}
= \begin{bmatrix} b_1\\ \vdots\\ b_n \end{bmatrix}, \qquad [\Sigma v]_i = b_i$$

Note that [v′Σ]ᵢ and [Σv]ᵢ are the covariance of asset i with the portfolio. Using this notation,

$$v'\Sigma v = [v'\Sigma]_1 v_1 + \cdots + [v'\Sigma]_n v_n = v_1[\Sigma v]_1 + \cdots + v_n[\Sigma v]_n$$

so that σ_p can be written as:

$$\sigma_p = \frac{v'\Sigma v}{\sigma_p} = \frac{[v'\Sigma]_1 v_1}{\sigma_p} + \cdots + \frac{[v'\Sigma]_n v_n}{\sigma_p} = \frac{v_1[\Sigma v]_1}{\sigma_p} + \cdots + \frac{v_n[\Sigma v]_n}{\sigma_p} \qquad (10.3\mathrm{c})$$

This gives an additive decomposition, with the terms [v′Σ]ᵢvᵢ/σ_p summing to σ_p; essentially the same as equation (10.3a). We can divide by an additional σ_p to get an additive decomposition in proportional terms:

$$1.00 = \frac{v'\Sigma v}{\sigma_p^2} = \frac{[v'\Sigma]_1 v_1}{\sigma_p^2} + \cdots + \frac{[v'\Sigma]_n v_n}{\sigma_p^2} = \frac{v_1[\Sigma v]_1}{\sigma_p^2} + \cdots + \frac{v_n[\Sigma v]_n}{\sigma_p^2} \qquad (10.3\mathrm{d})$$

where now the terms [v′Σ]ᵢvᵢ/σ_p² sum to 1.00.


Alternatively, we can start with the volatility σ_p and take a total differential:

$$d\sigma_p = \left(\frac{\partial\sigma_p}{\partial v}\right)dv = \sum_i \frac{[v'\Sigma]_i}{\sigma_p}\,dv_i = \frac{[v'\Sigma]_1}{\sigma_p}\,dv_1 + \cdots + \frac{[v'\Sigma]_n}{\sigma_p}\,dv_n$$

If we consider infinitesimal percent changes in the vᵢ (substituting dvᵢ = vᵢ d ln vᵢ), we arrive at:

$$d\sigma_p = \sum_i \frac{[v'\Sigma]_i v_i}{\sigma_p}\,d\ln v_i = \frac{[v'\Sigma]_1 v_1}{\sigma_p}\,d\ln v_1 + \cdots + \frac{[v'\Sigma]_n v_n}{\sigma_p}\,d\ln v_n \qquad (10.3\mathrm{e})$$

This gives exactly the same additive decomposition as (10.3c), with the terms [v′Σ]ᵢvᵢ/σ_p summing to σ_p. Also, we can take an infinitesimal percent change in volatility:

$$d\ln\sigma_p = \sum_i \frac{[v'\Sigma]_i v_i}{\sigma_p^2}\,d\ln v_i = \frac{[v'\Sigma]_1 v_1}{\sigma_p^2}\,d\ln v_1 + \cdots + \frac{[v'\Sigma]_n v_n}{\sigma_p^2}\,d\ln v_n \qquad (10.3\mathrm{f})$$

which gives the proportional decomposition of (10.3d), with the terms [v′Σ]ᵢvᵢ/σ_p² summing to 1.00. In other words, equations (10.3c) and (10.3d) provide a useful additive decomposition of the total volatility, which is the same as the decomposition of an infinitesimal change in the volatility.5 We

5 Jorion (2007), while sound in most areas, falls down here. Jorion (2007, section 7.2.1) introduces marginal VaR as dσ_p = Σᵢ [v′Σ]ᵢ/σ_p dvᵢ (a function of dv) rather than dσ_p = Σᵢ [v′Σ]ᵢvᵢ/σ_p d ln vᵢ (a function of d ln v). Such a definition is valid but of limited usefulness; it does not provide an additive decomposition of volatility or VaR, since the components [v′Σ]ᵢ/σ_p do not add up to the total volatility. Jorion's section 7.2.3 then introduces the term component VaR for [v′Σ]ᵢvᵢ/σ_p (what I call the marginal contribution to risk). Component VaR is presented as a partition of the VaR "that indicates how much the portfolio VAR would change approximately if the given component was deleted." This misses the point that [v′Σ]ᵢvᵢ/σ_p (Jorion's component VaR, my marginal contribution) provides the same marginal analysis as [v′Σ]ᵢ/σ_p (Jorion's marginal VaR) while also providing a valuable additive decomposition of the volatility or VaR. Furthermore, using the component VaR as an approximation to the change in VaR upon deletion of a particular asset is of limited usefulness; the approximation is often poor, and the actual volatility change to a zero position (the all-or-nothing contribution) is easy to calculate.


can call the terms in the decomposition the marginal contribution (levels) and the marginal contribution (proportional):

$$MCL_i = \frac{[v'\Sigma]_i\,v_i}{\sigma_p} = \frac{v_i\,[\Sigma v]_i}{\sigma_p} \qquad (10.4\mathrm{a})$$

$$MCP_i = \frac{[v'\Sigma]_i\,v_i}{\sigma_p^2} = \frac{v_i\,[\Sigma v]_i}{\sigma_p^2} \qquad (10.4\mathrm{b})$$

These terms give the contribution to the infinitesimal change in volatility (levels or proportional) due to a small percent change in position (equations (10.3e) and (10.3f)), and also provide an additive decomposition of the volatility (equation (10.3d)).

The derivation for the decomposition of volatility was based on the algebraic definition of the volatility (and variance) and made no assumptions about the functional form of the P&L distribution. It should therefore be clear that the decomposition will hold for any P&L distribution—normal or non-normal.6
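As a concrete illustration, here is a short sketch of equations (10.4a) and (10.4b) for the two-asset example, with the position weights folded into the dollar volatilities; it reproduces the proportional contributions shown later in Table 10.4 (0.287 and 0.713):

```python
import numpy as np

# Position-level dollar vols and correlation from the text (rho = 0.24)
v_sigma = np.array([130_800.0, 230_800.0])
rho = 0.24
corr = np.array([[1.0, rho], [rho, 1.0]])
Sigma = np.outer(v_sigma, v_sigma) * corr    # covariance of position P&Ls

sigma_p = np.sqrt(v_sigma @ corr @ v_sigma)  # portfolio volatility
mcl = (Sigma @ np.ones(2)) / sigma_p         # levels: sum to sigma_p
mcp = mcl / sigma_p                          # proportional: sum to 1.00
print(f"portfolio vol = {sigma_p:,.0f}")     # ~291,300
print("MCP:", np.round(mcp, 3))              # ~[0.287, 0.713]
```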

Deeper insight into the decomposition can be gained by noting that the decomposition is a consequence of the linear homogeneity of the volatility. Linear homogeneity means that scaling all positions by some scalar factor λ scales the risk by the same factor. If σ_p = Vol(v) is the volatility of a portfolio with weights v, then scaling all positions by a constant λ means:

$$\mathrm{Vol}(\lambda v) = \lambda\,\mathrm{Vol}(v)$$

Euler's law (Varian 1978, 269) states that any linearly homogeneous (differentiable) function R(v) satisfies:

$$R(v) \equiv \sum_{i=1}^{n} \frac{\partial R(v)}{\partial v_i}\,v_i$$

The additive decomposition of volatility (10.3c) follows from this directly. Similarly, the marginal decompositions (10.3e) and (10.3f)

6 The volatility itself may be more or less useful depending on the form of the P&L distribution, but when one does use the volatility as a risk measure, the marginal decomposition applies for any distribution.


follow directly:

$$dR(v) \equiv \sum_{i=1}^{n} \frac{\partial R(v)}{\partial v_i}\,v_i\,d\ln v_i$$

$$d\ln R(v) \equiv \sum_{i=1}^{n} \frac{\partial R(v)}{\partial v_i}\,\frac{v_i}{R}\,d\ln v_i$$

The terms ∂R(v)/∂vᵢ · vᵢ will sum to R(v), and the terms ∂R(v)/∂vᵢ · vᵢ/R will sum to 1.00.

More importantly, Euler's law means we can apply a similar additive and marginal decomposition to any risk measure R(v) that is linearly homogeneous. In fact, most risk measures used in practice (including volatility, VaR, and expected shortfall, but not probability of shortfall) are linearly homogeneous, so that a marginal decomposition can be applied to each of these.7

This also means that the concept of marginal contribution to risk does not depend on the particular estimation method used to calculate volatility or VaR. Contribution to risk can be calculated using the parametric (delta-normal) approach, the Monte Carlo approach, or the historical simulation approach.

McNeil, Frey, and Embrechts (2002, equations 6.23, 6.24, 6.26) give formulae for contributions for volatility, VaR, and expected shortfall. Say that the portfolio is made up of investments in n assets, the P&L for one unit of asset i being denoted by Xᵢ, and the amount invested in asset i is vᵢ. Then the total P&L is Σᵢ vᵢXᵢ, the Z% VaR is VaR_Z = {Y s.t. P[Σᵢ vᵢXᵢ ≤ Y] = Z},

7 Litterman (1996, 64) and Crouhy, Galai, and Mark (2001, 255) use Euler's law and the linear homogeneity of volatility or VaR, or both, to prove the marginal decomposition. Embrechts, McNeil, and Frey (2002, section 6.3) have a detailed discussion of Euler's law and application to capital allocation (and the full allocation property, which is equivalent to the additive decomposition discussed here). Such a decomposition will apply to any coherent risk measure, since linear homogeneity is one of the defining axioms for coherent risk measures. See Artzner et al. (1999) and Embrechts, McNeil, and Frey (2002, section 6.1) for a discussion of coherent risk measures.


and the expected shortfall is ES_Z = E[Σᵢ vᵢXᵢ | Σᵢ vᵢXᵢ ≤ VaR_Z]. The contributions are:

$$\begin{aligned}
\text{volatility:}\quad & MCL_i = \mathrm{cov}\!\left(v_iX_i,\ \textstyle\sum_j v_jX_j\right)\Big/\sqrt{\mathrm{var}\!\left(\textstyle\sum_j v_jX_j\right)}\\
\text{VaR:}\quad & MCL_i = v_i\,E\!\left[X_i \,\middle|\, \textstyle\sum_j v_jX_j = \mathrm{VaR}_Z\right]\\
\text{ES:}\quad & MCL_i = v_i\,E\!\left[X_i \,\middle|\, \textstyle\sum_j v_jX_j \le \mathrm{VaR}_Z\right]^{8,9}
\end{aligned}$$

See the appendix for explicit formulae for simulations. It is also important to note that the marginal contribution can be calculated for groups of assets and for subportfolios, and explicit formulae for the marginal contribution to volatility are given in the appendix.

For an example of using the contribution to volatility, consider the holdings of the U.S. Treasury and the CAC futures discussed earlier, and consider a small (infinitesimal) percent change in each holding. Table 10.4 shows the result, using equation (10.4), and shows that the marginal contribution to the portfolio volatility is much higher for the equity futures than for the bond.

The situation changes when the CAC futures position is short €7M, with the CAC now providing an even larger proportional contribution to the portfolio volatility (although the overall portfolio volatility is now lower; see Table 10.5).

8 For a normal distribution, the contributions to volatility, VaR, and expected shortfall are all proportional. Using the formulae in Section 8.4, we see that

MCLᵢ(VaR, normal) = [cov(vᵢXᵢ, Σⱼ vⱼXⱼ)/√var(Σⱼ vⱼXⱼ)] · Φ⁻¹(z)
MCLᵢ(ES, normal) = [cov(vᵢXᵢ, Σⱼ vⱼXⱼ)/√var(Σⱼ vⱼXⱼ)] · φ[Φ⁻¹(z)]/z

McNeil, Frey, and Embrechts (2005, 260) show that the proportionality for volatility, VaR, and expected shortfall holds for any elliptical distribution (and, furthermore, for any linearly homogeneous risk measure).
9 Note that Marrison's (2005, 143–144) method for estimating the contribution to VaR for Monte Carlo seems to apply to expected shortfall, not VaR. The correct formula for VaR should be: MCLᵢ = [Lossᵢ | Loss_p = VaR] = vᵢ · [Xᵢ | Σⱼ vⱼXⱼ = VaR]; that is, the contribution to VaR for asset i is the loss due to asset i in the scenario where the portfolio loss equals the VaR. (This will clearly be additive, since Σᵢ vᵢXᵢ = VaR.) The problem naturally arises that the estimate for MCLᵢ uses only the single observation vᵢ · [Xᵢ | Σⱼ vⱼXⱼ = VaR]. For a particular Monte Carlo simulation, we cannot average over multiple scenarios, since there is only one scenario for which Loss_p = Σᵢ vᵢXᵢ = VaR. To average over multiple observations for a particular asset i (that is, to obtain multiple vᵢ · [Xᵢ | Σⱼ vⱼXⱼ = VaR] for an asset i), we would need to carry out multiple complete simulations, taking one observation vᵢ · [Xᵢ | Σⱼ vⱼXⱼ = VaR] from each simulation.


We could also ask: What is the CAC position for which the futures make no contribution to the volatility? Zero is an obvious but not very valuable answer. Generally there will be some nonzero CAC position such that the contribution to the overall portfolio volatility will be zero. In the present case, having a small short futures position will provide zero equity futures contribution. Specifically, for a holding of −€950k, small changes in the holdings of the CAC futures will have almost no impact on the portfolio volatility.

The triangle addition of volatilities helps to illustrate what is happening, and the situation is actually the same as shown in Figure 10.2. The CAC position is chosen so that the resultant portfolio volatility (side A) forms a right angle with side C (CAC volatility). Triangle addition also helps show how and why such a position has zero contribution. Figure 10.3's top panel shows a change in side C (CAC volatility—for

TABLE 10.4 Volatility for Simple Portfolio—With Contribution to Risk

                       Volatility per   Position     Marginal Contribution, Proportional
                       $1M holding      Volatility   [vᵢ²σᵢ² + ρvᵢσᵢvⱼσⱼ]/σ_p²
+$20M UST 10-yr bond   $6,540           $130,800     0.287
+€7M CAC futures       $25,000          $230,800     0.713
Portfolio Volatility                    $291,300

TABLE 10.5 Volatility for Simple Portfolio—Contribution for Short CAC Futures

                       Volatility per   Position     Marginal Contribution, Proportional
                       $1M holding      Volatility   [vᵢ²σᵢ² + ρvᵢσᵢvⱼσⱼ]/σ_p²
+$20M UST 10-yr bond   $6,540           $130,800     0.176
−€7M CAC futures       $25,000          $230,800     0.824
Portfolio Volatility                    $236,400

TABLE 10.6 Volatility for Simple Portfolio—Zero Contribution for CAC

                       Volatility per   Position     Marginal Contribution, Proportional
                       $1M holding      Volatility   [vᵢ²σᵢ² + ρvᵢσᵢvⱼσⱼ]/σ_p²
+$20M UST 10-yr bond   $6,540           $130,800     1.0
−€950k CAC futures     $25,000          $31,390      0.0
Portfolio Volatility                    $126,900


clarity, a large change rather than infinitesimal). In this case, leg A (portfolio volatility) changes in length by almost nothing. The bottom panel shows a change in side B (U.S. Treasury volatility), and here the length of side A changes virtually one for one with side B.

The decomposition or marginal contribution to volatility is useful for the insight it provides into how the volatility will change for small changes in a single position, all other positions held fixed. It is particularly useful for large and complex portfolios, exactly the situation in which both intuition and aids such as the triangle diagrams (applicable only to two assets) break down.

Correlation with Portfolio

It is also possible to calculate, from the marginal contribution, the correlation of an asset with the portfolio:

$\rho_i$ = correlation of asset i with portfolio $= [\Sigma v]_i\big/(\sigma_p\,\sigma_i) = \mathrm{MCL}_i\big/(v_i\,\sigma_i)$.

FIGURE 10.3 Triangle Addition of Volatilities for +$20M UST, –€950k CAC futures. Panel A: change in CAC volatility (side C); Panel B: change in UST volatility (side B). In each panel, side B (+$20M UST) = $130,800, side C (–€950k CAC) = $31,400, and side A (portfolio volatility, +$20M UST –€950k CAC) = $127,000, with a 76° angle between sides B and C. Reproduced from Figure 5.14 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


This can also be extended to groups of assets or subportfolios using the partitioning discussed in the appendix.
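To make the mechanics concrete, here is a minimal sketch (mine, not the book's code) of these calculations for the two-asset example, in Python with numpy. It assumes the position-level volatilities of Table 10.4 and the 0.24 UST/CAC correlation quoted in footnote 11; positions are scaled so each entry carries unit weight.

```python
# Sketch: marginal contribution and correlation with the portfolio.
import numpy as np

vol = np.array([130_800.0, 230_800.0])    # stand-alone position volatilities
rho = 0.24                                # UST/CAC correlation (footnote 11)
cov = np.outer(vol, vol) * np.array([[1.0, rho], [rho, 1.0]])
v = np.ones(2)                            # unit weights on the scaled positions

sigma_p = np.sqrt(v @ cov @ v)            # portfolio volatility
cov_with_port = cov @ v                   # [Sigma v]_i, covariance with portfolio
mcl = v * cov_with_port / sigma_p         # marginal contribution, levels
mc_prop = v * cov_with_port / sigma_p**2  # proportional form, sums to 1.0
corr_with_port = cov_with_port / (vol * sigma_p)

print(round(sigma_p))                     # ~291,321
print(mc_prop.round(3))                   # ~[0.287, 0.713]
print(corr_with_port.round(3))            # ~[0.639, 0.900]
```

Up to rounding, these reproduce Tables 10.4 and 10.7.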

All-or-Nothing Contribution to Volatility or VaR

The marginal contribution gives the change in risk for a small (infinitesimal or marginal) change in position, but we will also find it useful to measure the change if the position is entirely removed. This is the all-or-nothing contribution (called marginal contribution by RiskMetrics 1999, 2001, and incremental VaR by Galai, Crouhy, and Mark 2001 and Jorion 2006).

The formula for the volatility at a zero position is simple (see the appendix for its derivation):

Volatility at asset k zero position $= \sqrt{\,v'\Sigma v - 2\,v_k[\Sigma v]_k + v_k\sigma_{kk}v_k\,}$   (10.5)

The all-or-nothing contribution to volatility for asset k is the reduction in volatility moving to a zero position:

All-or-Nothing Contribution for asset k $= \sqrt{v'\Sigma v} - \sqrt{\,v'\Sigma v - 2\,v_k[\Sigma v]_k + v_k\sigma_{kk}v_k\,}$
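A short sketch of equation (10.5), reusing cov and v (and numpy) from the earlier snippet:

```python
# Sketch: volatility with asset k zeroed out (eq. 10.5), and the
# all-or-nothing contribution as the difference.
def vol_at_zero(cov, v, k):
    cw = cov @ v
    return np.sqrt(v @ cov @ v - 2 * v[k] * cw[k] + v[k] * cov[k, k] * v[k])

def all_or_nothing(cov, v, k):
    return np.sqrt(v @ cov @ v) - vol_at_zero(cov, v, k)

print(round(all_or_nothing(cov, v, 0)))   # UST: ~60,500 (Table 10.8: 60,490)
```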

10.3 BEST HEDGE

When considering finite or large changes in an asset holding, it is very useful to consider the change that would optimally hedge the rest of the portfolio. We can call this the best hedge—the position size that reduces the volatility as much as possible, or hedges the rest of the portfolio as effectively as possible.

To work out the best hedge position, consider that the marginal contribution attributable to a particular position may be either positive (adding to the portfolio risk) or negative (lowering the portfolio risk—acting as a hedge). At some point, the marginal contribution will be zero. This will be the position size that optimally hedges the rest of the portfolio.10

10 Trivially, for a zero position, the marginal contribution will be zero, but there will also generally be a nonzero position such that the marginal contribution is zero, and this is what we are interested in. This assumes that the position may be long or short (positive or negative). If a portfolio is constrained to be long only, then the best hedge position may not be obtainable, but it is nonetheless useful for the insight it can provide.


We can calculate the position size for asset k for which the marginal contribution is zero, given no changes in any other asset holdings. This means finding the $v_k^{*}$ that satisfies

$\mathrm{MCP}_k = [v'\Sigma]_k\,v_k/\sigma_p^2 = 0 \;\Rightarrow\; [v'\Sigma]_k = \sum_{i\neq k}\sigma_{ik}v_i + \sigma_{kk}v_k^{*} = 0$

$\Rightarrow$ Best Hedge $= v_k^{*} = -\sum_{i\neq k}\sigma_{ik}v_i\big/\sigma_{kk} = -\big([v'\Sigma]_k - v_k\sigma_{kk}\big)\big/\sigma_{kk}$

The point of zero marginal contribution is the point at which portfolio risk is minimized with respect to the size of asset k, since the marginal contribution is the derivative of the volatility with respect to its position. This will be a best hedge in the sense of being the position in asset k that minimizes the portfolio volatility (all other positions unchanged).

Earlier, for a portfolio with two assets, we used the triangle diagrams in discussing risk reduction and contribution to risk. The best hedge is where the contribution is zero and where the length of the leg is such that the third leg (side A, or the portfolio volatility) is minimized or makes a right angle with the leg under consideration.

For the preceding U.S. Treasury and CAC futures example (Figures 10.2 and 10.3) we varied the size of the CAC position, keeping $20M of the U.S. Treasury. The marginal contribution of the CAC is zero when the CAC position is –€950k, as seen in Table 10.6.

Figure 10.3 shows that the resultant volatility (triangle leg A) forms a right angle with the leg representing the CAC volatility. P&L for the U.S. Treasury and the CAC are positively correlated, so that the CAC best hedge position is actually a short position, hedging the $20M long U.S. Treasury position.

The portfolio volatility at the best hedge position (see the appendix for its derivation) is given by:

Volatility at asset k best hedge position $= \sigma_p(k^{*}) = \sqrt{\,v'\Sigma v - \big([\Sigma v]_k\big)^2\big/\sigma_{kk}\,}$   (10.6)
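A sketch of the best hedge position and equation (10.6), again reusing cov and v from the first snippet:

```python
# Sketch: single-asset best hedge (the zero-marginal-contribution position)
# and the volatility at that position (eq. 10.6).
def best_hedge_position(cov, v, k):
    cw = cov @ v
    return -(cw[k] - v[k] * cov[k, k]) / cov[k, k]

def vol_at_best_hedge(cov, v, k):
    cw = cov @ v
    return np.sqrt(v @ cov @ v - cw[k] ** 2 / cov[k, k])

# For the CAC (k=1): ~-0.136 of the scaled unit, i.e. roughly -EUR 0.95M
# of the EUR 7M holding; volatility ~126,900, matching Table 10.6.
print(best_hedge_position(cov, v, 1), round(vol_at_best_hedge(cov, v, 1)))
```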

Examples for Marginal Contribution, All-or-Nothing Contribution, and Best Hedges

The concepts of marginal contribution to volatility, all-or-nothing contribution, best hedge position, and best hedge volatility are actually very simple, but when first encountered they can be confusing. It can take some time to recognize how they relate and when different measures are useful.


So I will now turn to a simple example through which we can examine the different measures.

We will continue with the example of the two-asset portfolio used before: $20 million of the U.S. Treasury and €7 million of the CAC equity futures. Tables 10.7 and 10.8 show the same portfolio as in Table 10.4 together with additional measures. The volatility of the U.S. Treasury (considered on its own) is about half that of the CAC futures even though the notional is about double, and the Treasury contributes about one-third of the total volatility. Tables 10.7 and 10.8 show the following additional measures:

- Marginal contributions to volatility in both proportional and level form
- Correlation with the portfolio
- All-or-nothing contribution to volatility
- Best hedge positions
- Replicating position
- Volatility at the best hedge position

TABLE 10.7 Volatility for Simple Portfolio—With Marginal Contribution and Correlation

                          (stand-alone)   MC             MC        Correlation
Position                  Volatility      Proportional   Levels    with Port
$20.0M 10-yr UST          130,800         0.287          83,550    0.639
€7.0M CAC Equity Index    230,800         0.713          207,800   0.900
Portfolio Volatility      291,300         1.000          291,300

TABLE 10.8 Volatility for Simple Portfolio—With All-or-Nothing Contribution and Best Hedges

                          (stand-alone)   All-or-Nothing   Best Hedge   Replicating   Volatility at   % Volatility
Position                  Volatility      Contribution     Position     Position      Best Hedge      Reduction
$20.0M 10-yr UST          130,800         60,490           –8.47        28.5          224,100         23.1
€7.0M CAC Equity Index    230,800         160,600          –0.95        7.95          126,900         56.4
Portfolio Volatility      291,300


The marginal contribution in proportional and level form shows the same thing, only reported in different ways. I personally prefer the proportional contribution. I look to the total volatility first to gain an estimate of the overall portfolio variability. I then turn to the marginal contribution (proportional) to see how different risk factors, asset classes, or subportfolios contribute to that overall volatility. Note, however, that my preference for the proportional contribution is just that—a personal preference. Either measure is equally valid. Both marginal contributions sum to the total (to 1.00 for the proportional, to the overall portfolio volatility for the levels), and this additive decomposition of the overall volatility is the strength of either form of the marginal contribution.

The all-or-nothing contribution for a specific asset or risk factor is how much the actual holding contributes to the volatility. It is the change in the portfolio volatility that occurs when we go from zero holding to the actual holding in that asset or risk factor. For this simple two-asset portfolio, the all-or-nothing contribution is almost trivial and can be easily worked out from the stand-alone volatilities in Table 10.7 or 10.8. The total portfolio volatility is $291,300. Say we did not hold the U.S. Treasury. Only the CAC futures would remain, and so the portfolio volatility would be the CAC futures stand-alone volatility. The U.S. Treasury all-or-nothing contribution is the difference between the portfolio volatility and the CAC stand-alone volatility ($291,300 – $230,800). Similarly, the CAC all-or-nothing contribution is the difference between the overall portfolio volatility and the U.S. Treasury stand-alone volatility ($291,300 – $130,800).

For anything more than a simple two-asset portfolio, the calculation of the all-or-nothing contribution is not so trivial. Say we added $40 million of a U.S. five-year Treasury. Tables 10.9 and 10.10 show the resulting volatilities and contributions. The five-year Treasury has a stand-alone volatility roughly the same as the 10-year (it is about half the duration but otherwise behaves similarly to the 10-year Treasury). Now the all-or-nothing contribution must be calculated rather than inferred directly from the stand-alone volatilities.

TABLE 10.9 Volatility for Portfolio with Added 5-Year U.S. Treasury—Marginal Contribution and Correlation

                          (stand-alone)   MC             MC        Correlation
Position                  Volatility      Proportional   Levels    with Port
$40.0M 5-yr UST           131,100         0.273          105,100   0.802
$20.0M 10-yr UST          130,800         0.267          102,800   0.786
€7.0M CAC Equity Index    230,800         0.461          177,800   0.770
Portfolio Volatility      385,700         1.000          385,700



One method for calculating the all-or-nothing contribution, which is straightforward but cumbersome, would be to revalue the portfolio multiple times, each time leaving out one position. When working in a variance-covariance framework, however, equation (10.5) provides a simple way to calculate the all-or-nothing contribution. The calculation is particularly simple because the term $[\Sigma v]$ shows up in virtually all the calculations we are discussing—marginal contribution, all-or-nothing contribution, and best hedges. The term $[\Sigma v]$ denotes the vector that results from the product of the market variance-covariance matrix and the vector of positions or deltas. In other words:

$[\Sigma v]_i = i$th element of $\Sigma\cdot v = \sum_{j=1}^{n}\sigma_{ij}v_j$

This is the covariance of asset or risk factor i with the portfolio, and it is the central element of all the calculations. It goes into the calculation of the marginal contribution:

$\mathrm{MCL}_i = [v'\Sigma]_i\,v_i/\sigma_p = v_i\,[\Sigma v]_i/\sigma_p$

the correlation of asset or risk factor i with the portfolio:

$\rho_i = [\Sigma v]_i\big/(\sigma_i\,\sigma_p)$

the all-or-nothing contribution (see equation (10.5)), and the best-hedge volatility (see equation (10.6)).

TABLE 10.10 Volatility for Portfolio with Added 5-Year U.S. Treasury—All-or-Nothing Contribution and Best Hedges

                          (stand-alone)   All-or-Nothing   Best Hedge   Replicating   Volatility at   % Volatility
Position                  Volatility      Contribution     Position     Position      Best Hedge      Reduction
$40.0M 5-yr UST           131,100         94,430           –54.4        94.4          230,400         40.3
$20.0M 10-yr UST          130,800         91,520           –26.4        46.4          238,300         38.2
€7.0M CAC Equity Index    230,800         130,900          –2.01        9.01          246,100         36.2
Portfolio Volatility      385,700


Table 10.10 shows the results of applying equation (10.5) for the calculation of the all-or-nothing contribution.

In my experience, the all-or-nothing contribution is the least useful of the measures discussed here. The marginal contribution, providing an additive decomposition of the portfolio volatility and of how the volatility changes for small changes in holdings, provides useful information about small adjustments to the portfolio. The best hedges, to which we now turn, provide insight into the composition of the portfolio and useful information on how the volatility changes for large adjustments in holdings.

Turn back to Table 10.8, the portfolio with only the 10-year Treasury and the CAC futures. The "Best Hedge Position" is the holding that provides the best hedge to the rest of the portfolio. For these two securities, we discussed earlier the CAC holding that provided the best hedge to the U.S. Treasury. Table 10.8 shows this position—short €950k of the CAC futures. Holding this amount of the CAC futures provides the best hedge to the rest of the portfolio (the rest of the portfolio in this case is just $20 million of the 10-year Treasury).

In Section 10.2, we focused on the risk reduction potential of one asset versus another individual asset, and the relevant correlation was that between the CAC and the 10-year Treasury. Looking at individual assets is fine when we have only two but does not work well when we have a multi-asset portfolio, as in Tables 10.9 and 10.10. We need to change our focus. We need to consider the CAC futures as one asset and the portfolio as a second asset. We can measure the correlation between the CAC futures and the whole portfolio (which includes some amount of the CAC futures). Table 10.8 shows that this correlation is 0.90, and referring back to Table 10.2, we can see that the risk reduction potential (using the CAC futures against the whole portfolio) is 56 percent—we can reduce the portfolio volatility 56 percent by optimally choosing the CAC futures position. Looking at Table 10.8, we see that the CAC best hedge reduces the portfolio volatility by exactly this: (291,300 – 126,900)/291,300 = 56 percent.11

The idea of best hedges and correlation with the portfolio carries over to large portfolios. Tables 10.9 and 10.10 show three assets and the best hedges for each of the 5-year Treasury, 10-year Treasury, and CAC futures. Now the CAC best hedge is short €2.01M, but the portfolio volatility is reduced by only 36.2 percent.

11 In Section 10.2, the correlation between the CAC futures and the 10-year Treasury was 0.24, which gave risk reduction potential of only 3 percent. In that section, we were considering how much the 10-year Treasury volatility could be reduced, and from 130,800 to 126,900 is indeed only 3 percent. Here we are asking a different question—how much the portfolio volatility can be reduced.


The portfolio now holds much more in bonds and behaves more like a bond portfolio than the earlier portfolio. We can see, in fact, that both the 5-year and the 10-year Treasury provide slightly better hedges to the portfolio than the CAC futures.

One final point regarding the concepts of marginal contribution, best hedges, and so on. We must always exercise caution and judgment in using such measures. They cannot provide definitive answers, only helpful guidance. They show us how our portfolio might have behaved in the past, but the future is always uncertain. Tomorrow may bring not only random fluctuations in the markets, but more fundamental changes in market relationships.

For the portfolio described in Table 10.10, the five-year Treasury is the best possible hedge assuming that markets behave in the future generally as they have in the past. But there are two large cautions, for two fundamentally different reasons. First, the correlation between the five-year Treasury and the portfolio is only 0.80, which means that on a particular day, there is still a good chance the two will not move together. In other words, even if markets behave as they have in the past, on a particular day there is still a good chance the hedge will not work perfectly. As users of such tools, it is incumbent on us to understand that best hedge does not mean perfect hedge, and in particular circumstances it may not even be a good hedge. We have to understand how well such a hedge works, for example, by looking at the correlation or the potential risk reduction.

The second caution in using such tools is that the future may bring fundamental changes in market relations. The 5-year and 10-year Treasuries generally move very much together, and this co-movement is what leads to the five-year Treasury being the best hedge. But were this relation to break down, the five-year Treasury might no longer be the best hedge. We have to understand that the results of an exercise such as in Table 10.10 provide guidance but not definitive answers.

10.4 REPLICATING PORTFOLIO

Representing a complex portfolio in terms of a simpler portfolio is both useful and relatively straightforward. For a single asset k, the best hedge position $v_k^{*}$ minimizes the portfolio variance when changing asset k only. This implies that the difference between the original and best hedge position is a mirror portfolio,

Single-Asset Mirror Portfolio using Asset k $= \mathrm{MP}(k) = v_k - v_k^{*}$


in the sense that the variance of the difference between the original and mirror portfolio is minimized:

Variance[Original Portfolio $-\ \mathrm{MP}(k)$] = Variance[Original Portfolio $-\ (v_k - v_k^{*})$] = minimum w.r.t. asset k

It is simple to calculate the best single-asset mirror portfolio across all assets in the original portfolio: for each asset, calculate the volatility at the best hedge position and choose from among all assets that single asset with the lowest best-hedge volatility. It is natural to call this best mirror portfolio a replicating portfolio because it best replicates the portfolio (best using single assets chosen from the original portfolio).
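In code, this selection is a one-line argmin over the best-hedge volatilities, reusing vol_at_best_hedge and best_hedge_position from the earlier sketch:

```python
# Sketch: the best single-asset replicating portfolio is the asset whose
# best-hedge volatility (eq. 10.6) is lowest; the mirror portfolio size is
# the difference between the actual and best-hedge positions.
k_star = min(range(len(v)), key=lambda k: vol_at_best_hedge(cov, v, k))
mirror_size = v[k_star] - best_hedge_position(cov, v, k_star)
```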

The idea becomes clearer when we focus on an example. Table 10.10 shows the best hedges, both the holdings and the volatilities at those holdings, for the simple portfolio consisting of long $40 million of the 5-year Treasury, $20 million of the 10-year Treasury, and €7 million of CAC futures. The best hedge using the 10-year Treasury would be to replace the long $20 million holding with a short of about $26 million. In other words, the best hedge would require shorting about $46 million of the 10-year Treasury versus the existing portfolio. A new portfolio made up of the original portfolio combined with short $46 million of the 10-year Treasury would have the volatility shown under "Volatility at Best Hedge."

When we examine the volatility at the best hedge holdings, we see that the five-year Treasury (a change of $94 million versus the original portfolio) actually gives the lowest volatility among all choices. This implies that a holding of long $94 million in the five-year Treasury is a mirror portfolio that most closely replicates the existing portfolio, out of all possible single-asset portfolios using assets in the existing portfolio. In this sense, such a holding in the five-year Treasury is the best single-asset replicating portfolio. We can think of this in two ways. First, the portfolio behaves most like a five-year U.S. Treasury, and such a position explains 40 percent of the volatility ((385,700 – 230,400)/385,700 = 40 percent). This gives a simple, concise view of the portfolio. Second, shorting $94 million of the five-year U.S. Treasury will provide a simple hedge, although in this case it will not be very effective since the hedge only reduces the volatility by 40 percent.

One really important point we need to take from Table 10.10 is the difference between what we can learn from the marginal contribution versus the best hedges. In Table 10.9, we see that the CAC futures has by far the largest marginal contribution. The CAC is the asset that contributes the most to the portfolio volatility. But it will not provide the best hedge; the five-year Treasury actually provides the best hedge. The 10-year and 5-year Treasuries are similar, and the 5-year Treasury will hedge not only the 5-year Treasury in the portfolio but also the 10-year Treasury.



Multi-Asset Replicating Portfolio

Such a single-asset replicating portfolio gives a simple representation of how the full portfolio behaves, but it will usually be too simple to be useful on its own. Fortunately, the replicating-portfolio idea extends in a straightforward manner to multiple assets, to provide a replicating portfolio that is still simple but more informative.

For two particular assets, j and k, the best hedge positions $v_j^{*}$ and $v_k^{*}$ are given by:

$\begin{pmatrix} v_j^{*} \\ v_k^{*} \end{pmatrix} = -\begin{pmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{pmatrix}^{-1} \begin{pmatrix} [\Sigma v]_j \\ [\Sigma v]_k \end{pmatrix} + \begin{pmatrix} v_j \\ v_k \end{pmatrix}$   (10.7)

This extends to more than two assets in the obvious manner. (See the appendix for its derivation.)

The volatility of the best-hedge portfolio, with the positions for assets j and k set equal to the best hedge positions, is:

Volatility at asset j and k best hedge $= \sigma_p(j\&k) = \sqrt{\,v'\Sigma v - \big([\Sigma v]_j \;\; [\Sigma v]_k\big)\begin{pmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{pmatrix}^{-1}\begin{pmatrix} [\Sigma v]_j \\ [\Sigma v]_k \end{pmatrix}}$   (10.8)

Again, this extends to more than two assets in the obvious manner. (See the appendix for its derivation.)
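A sketch of equations (10.7) and (10.8) for an arbitrary index set S of hedge assets (S = [j, k] gives the two-asset case shown above); cov, v, and numpy are as in the earlier snippets:

```python
# Sketch: joint best hedge positions (eq. 10.7) and the volatility at the
# joint best hedge (eq. 10.8) for the assets listed in S.
def best_hedge_positions(cov, v, S):
    cw = (cov @ v)[S]
    sub = cov[np.ix_(S, S)]
    return v[S] - np.linalg.solve(sub, cw)

def vol_at_joint_best_hedge(cov, v, S):
    cw = (cov @ v)[S]
    sub = cov[np.ix_(S, S)]
    return np.sqrt(v @ cov @ v - cw @ np.linalg.solve(sub, cw))
```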

The two-asset replicating portfolio is found by first defining the two-asset mirror portfolio for assets j and k as

Two-Asset Mirror Portfolio using assets j & k $= \mathrm{MP}(j\&k) = \begin{pmatrix} v_j - v_j^{*} \\ v_k - v_k^{*} \end{pmatrix}$

The replicating portfolio using two assets is the two-asset mirror portfolio with the lowest variance. Relatively small replicating portfolios, using 3, 5, or 10 assets, can provide useful information and insight into the full portfolio. The replicating portfolio can serve as a proxy, summary, or approximation of the full portfolio, with the percent variance explained by the replicating portfolio providing a measure of the quality of the approximation.


One straightforward way to calculate the replicating portfolio using n assets is by brute force: calculate the volatility (or variance) reduction resulting from all possible combinations of mirror portfolios using assets taken n at a time and then choose the best. The problem is simplified because the best-hedge variance and variance reduction can be calculated quickly using the preceding formulae.

Such an approach is feasible when the number of assets to be searched over, m, is relatively small (say, 40 or less) but becomes problematic when the number of assets in both the replicating portfolio and the original portfolio gets large. For example, searching for the three-asset replicating portfolio when the original portfolio contains 20 assets involves only 1,140 cases. As Table 10.11 shows, however, the best 10-asset replicating portfolio for an original 50-asset portfolio requires searching more than 10.3 billion cases. In this case, some ad hoc strategy can be employed, such as searching over only the top 40 assets measured by single-asset best hedge variance reduction, or removing similar assets (for example, removing 10-year and leaving 30-year U.S. Treasury bonds) from the assets to be searched over.

TABLE 10.11 Number of Portfolios Searched When Choosing Best n of m Assets

             Choose Best 3   Choose Best 5   Choose Best 10
20 assets    1,140           15,504          184,756
50 assets    19,600          2,118,760       10.3 × 10^9

Alternatively, for a large portfolio, a strategy analogous to stepwise regression—building up the replicating portfolio assets one at a time—can be employed. The simplest procedure is to add additional assets one at a time, without any rechecking of earlier assets (a code sketch follows below):

- Choose the first replicating portfolio asset as the volatility-minimizing single best hedge.
  - That is, calculate $\sigma_p(k^{*})$ for all k. This is the best-hedge volatility for all one-asset best hedges or mirror portfolios.
  - Choose as the first replicating portfolio asset, 1*, the asset k that produces the smallest $\sigma_p(k^{*})$.
- Choose the second replicating portfolio asset as the asset that, combined with the first, produces the largest incremental reduction in portfolio variance.
  - That is, calculate $\sigma_p(1^{*}\&k^{*})$ for all k = {all assets excluding the first replicating portfolio asset}. This is the best-hedge volatility for all two-asset best hedges that include the first replicating portfolio asset.
  - Choose as the second replicating portfolio asset, 2*, the k for which $\sigma_p(1^{*}\&k^{*})$ is the smallest (or the variance reduction $\sigma_p^2[1^{*}] - \sigma_p^2[1^{*}\&k^{*}]$ is the largest).
- Choose the third replicating portfolio asset as the asset that, combined with the first two, produces the largest reduction in portfolio variance.
  - That is, calculate $\sigma_p(1^{*}\&2^{*}\&k^{*})$ for all k = {all assets excluding the first and second replicating portfolio assets}. This is the best-hedge volatility for all three-asset best hedges that include the first two replicating portfolio assets.
  - Choose as the third replicating portfolio asset, 3*, the k for which $\sigma_p(1^{*}\&2^{*}\&k^{*})$ is the smallest (or the variance reduction $\sigma_p^2[1^{*}\&2^{*}] - \sigma_p^2[1^{*}\&2^{*}\&k^{*}]$ is the largest).
- Continue adding single replicating portfolio assets until the number of desired replicating portfolio assets is obtained.

A more complex procedure, looking back at earlier replicating portfolio assets at each step to ensure they should still be included, is outlined in the appendix.
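A minimal sketch of the simple (no look-back) build-up, reusing vol_at_joint_best_hedge from the earlier snippet:

```python
# Sketch: greedy stepwise selection of replicating portfolio assets.
def greedy_replicating_assets(cov, v, n):
    chosen = []
    remaining = set(range(len(v)))
    for _ in range(n):
        # add the asset giving the lowest joint best-hedge volatility
        k = min(remaining,
                key=lambda j: vol_at_joint_best_hedge(cov, v, chosen + [j]))
        chosen.append(k)
        remaining.discard(k)
    return chosen   # indices of the n replicating portfolio assets
```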

The discussion so far has focused on choosing a replicating portfolio from the assets within a portfolio. Alternatively, an externally specified set of assets can be used. The replicating portfolio weights can be chosen by using linear regression analysis.12

10.5 PRINCIPAL COMPONENTS AND RISK AGGREGATION

Principal components is a data-reduction technique that can reduce the effective data dimensionality and provide a summary view of risk intermediate between the very granular level of individual trading desks and the very aggregate level of portfolio volatility or VaR. It takes the original returns for assets or risk factors, represented by a vector $Y = (y_1, \ldots, y_n)'$ (for example, the yields for 1-year, 2-year, . . . , 30-year bonds), and transforms them into a new set of variables, F, by means of a linear transformation:

$F = A'\cdot Y$   (10.9)

where
A′ = matrix of linear transformation coefficients ($n \times n$, where n is the number of original variables)
Y = column vector of original variables (for example, yields)
F = $(f_1, \ldots, f_n)'$ = column vector of the n new variables.

12 Examination of equation (10.7) for calculating best hedge weights shows that it is essentially the least-squares equations, using assets already included in the portfolio. For assets not in the portfolio, the history of the desired assets can be regressed against a synthetic history of the daily portfolio P&L ($v'Y$, where $Y' = (y_1\; y_2\; \ldots\; y_n)$ is a vector of the historic changes in assets). The subset of constituent assets is chosen from the full set either by brute-force search or by a stepwise regression approach, just as before.



The new principal components can be used as an aid in summarizing risk, in aggregating risk from disparate sources into a set of consistent factors (for example, trading desks that use different yield curve points), and to decompose realized P&L into components due to independent factor changes.

Principal Components—Concepts

Principal component analysis is detailed in the appendix. The main point is that the new variables F, defined by (10.9), are uncorrelated (and, assuming the Y are multivariate normal, independent). A simple example will help fix ideas. Consider 2-year and 10-year rates. Assume that the rates have the following volatilities:

         ln vol   rate   bp vol (annual)   bp vol (daily)
2-yr     20%      5%     100               6.262
10-yr    15%      5%     75                4.697


and that the correlation between 2-year and 10-year rates is 80 percent. Then (see the appendix for details) the matrix A will be:

$A = \begin{pmatrix} 0.8193 & -0.5734 \\ 0.5734 & 0.8193 \end{pmatrix}$

The first column gives the transformation to $f_1$, and the second the transformation to $f_2$:

$f_1 = 0.8193\,y_1 + 0.5734\,y_2$
$f_2 = -0.5734\,y_1 + 0.8193\,y_2$

Given the history on $y_1$ and $y_2$, these new variables are statistically uncorrelated (and independent, assuming Y is multivariate normal).

We can also transform from F to Y according to:

$Y = A'^{-1} F$

and we can ask what the changes are in Y corresponding to 1-σ moves in the new factors. (See the appendix for the exact formula.) The changes in Y due to a 1-σ change in the $f_i$ are called the factor loadings, FL. Remember that $f_1$ and $f_2$ are independent, so we can think of such moves as occurring, on average, independently. In this example, the changes in $y_2$ and $y_{10}$ for a 1-σ move in $f_1$ and $f_2$ are (see appendix):

1-σ move in $f_1$:  Δ$y_2$ = 6.11,  Δ$y_{10}$ = 4.28
1-σ move in $f_2$:  Δ$y_2$ = –1.36,  Δ$y_{10}$ = 1.94

This is roughly a parallel shift (up 6.11 and 4.28 basis points for 2-year and 10-year) and a curve twist (down 1.36 basis points for the 2-year and up 1.94 basis points for the 10-year). That is, for the history of 2-year and 10-year yield movements summarized in the preceding covariance matrix, such a parallel shift and curve twist have occurred independently.13

We can go further and ask: what is the risk with respect to the new factors? Say that the sensitivities, measured as the P&L for a one-basis-point move in 2-year and 10-year yields, are –1 and 2:

13 It is a stylized fact that for many currencies, yield curves decompose into (approximately) parallel, twist, and hump components.


risk = delta = (–1 w.r.t. 1bp in 2-yr, 2 w.r.t. 1bp in 10-yr)

Using the moves in 2-year and 10-year yields resulting from 1-σ moves in $f_1$ and $f_2$, we can calculate the P&L for 1-σ moves in the new factors:

risk = delta = (2.44 w.r.t. 1σ in $f_1$, 5.23 w.r.t. 1σ in $f_2$)

These are independent, so the overall portfolio variance is just the sum of the component variances:

Portfolio variance $= 2.44^2 + 5.23^2 = 5.97 + 27.35$

Not only is the variance simple to compute, but we can treat the portfolio as being exposed to the parallel and twist risks operating independently. Even more importantly, we can immediately see that the twist risk (component $f_2$) is by far the more important risk.
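The whole two-rate example can be reproduced in a few lines. The following is a sketch (mine, not the book's) that recovers A, the factor loadings, and the factor deltas from the stated daily bp vols and the 0.80 correlation; the sign convention for the components is arbitrary, so one line fixes it to match the text.

```python
# Sketch: principal components of the 2-yr/10-yr covariance matrix.
import numpy as np

vols = np.array([6.262, 4.697])                  # daily bp vols: 2-yr, 10-yr
cov = np.outer(vols, vols) * np.array([[1.0, 0.8], [0.8, 1.0]])

eigval, A = np.linalg.eigh(cov)                  # columns of A = components
order = np.argsort(eigval)[::-1]                 # largest variance first
eigval, A = eigval[order], A[:, order]
A = A * np.sign(A[1, :])                         # sign convention as in the text

FL = A * np.sqrt(eigval)                         # dY for a 1-sigma move in each f
delta_y = np.array([-1.0, 2.0])                  # P&L per 1bp in 2-yr, 10-yr
delta_f = FL.T @ delta_y                         # P&L per 1-sigma factor move

print(FL.round(2))       # ~[[6.11, -1.36], [4.28, 1.94]]
print(delta_f.round(2))  # ~[2.44, 5.23]
print(round(delta_f @ delta_f, 2))  # total variance ~ 5.97 + 27.35
```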

Principal Components for Risk Aggregation

Principal components can be used to aggregate risk, either to summarize disaggregated risk or to aggregate across disparate trading operations.

Although portfolio volatility and VaR provide a valuable high-level summary of the magnitude of risk (that is the power of VaR and quantitative-based risk measurement), they do not provide a view into the sources and direction of the risk. At the other end of the spectrum, individual trading units calculate and report risk at a very granular or disaggregated level, as required for the micromanagement of the trading risk, but such a micro view is too detailed for intermediate management and aggregation. An intermediate level, between the micro view used by the individual trader and the top-level view provided by VaR, is valuable, and it can often be aided by principal components analysis.

More specifically, the P&L due to 1-σ moves can provide an intermediate-level view. Principal components combine a history of market risk factors with the portfolio's sensitivity to produce a description of probable P&L scenarios.14 This is particularly effective when the principal components have an easily understood interpretation, as usually occurs with developed-economy fixed-income markets (where the first three principal components can usually be interpreted as shifts in yield curve level, slope, and curvature).

14 More accurately, scenarios that occurred frequently in the past. The principal components analysis is based on history, and past results may not be representative of future results. As I have stressed before, however, having a clear view of how the portfolio would have behaved in the past is the first step to better judgment about how it may behave in the future.



The simple preceding example, using just two yield curve points, shows the first two components as being roughly shifts in yield curve level and slope. With the yield curve made up of only two elements (2- and 10-year yields), the principal components provide little additional insight. In practice, a realistic yield curve will often be made up of 20 or more elements (cf. Coleman 1998a for a description of building a forward curve from market data). The sensitivities with respect to detailed yield curve points provide a granular view of the risk, but it can often be difficult to summarize, and reducing them to three or four important components can indeed be useful.

Consider Table 10.12, which shows, for two hypothetical trading desks, sensitivities with respect to yield curve points. Such a sensitivity report is necessary for managing risk at a granular level, where a trader must hedge small market moves and where the trader is intimately familiar with market behavior. Such a report, however, is not useful at a level one or more steps removed from minute-by-minute management of the risk. It does not adequately summarize the risk, both because it has too much detail and, more importantly, because it does not provide any indication of the size and type of yield curve moves that are likely to occur. For a division manager not watching the markets hour by hour, having a summary that incorporates market history can be invaluable. It becomes absolutely necessary when considering risk across multiple markets.

TABLE 10.12 Sample Sensitivity for Two Trading Desks

            Desk One   Desk Two   Total
Maturity    $/bp       $/bp       $/bp
3 mths      600        500        1,100
6 mths      100        0          100
1 yr        0          0          0
2 yrs       1,800      1,500      3,300
3 yrs       0          0          0
5 yrs       300        0          300
7 yrs       0          0          0
10 yrs      500        0          500
20 yrs      0          0          0
30 yrs      –1,500     –2,000     –3,500

Note: This is the sensitivity to a 1bp fall in the appropriate yield; that is, a positive number means long the bond, making money when bond prices rise.


Turn now to Table 10.13, which shows the factor loadings, or the response to a 1-σ move in each of the first three principal components. Figure 10.4 shows these graphically. We can see that the first component is a shift in the level of the yield curve (a larger shift at the short end of the curve, and a shift down in yields giving a shift up in price). The second is a twist centered at about seven years maturity. The third is a hump, again centered at about seven years maturity.

TABLE 10.13 Factor Loadings (sensitivity to a 1-σ move in the principal components)

Maturity    Level   Flattening   Hump
3 mths      –6.0    1.4          –1.0
6 mths      –5.7    1.3          –0.9
1 yr        –5.5    1.2          –0.8
2 yrs       –5.0    1.0          –0.5
3 yrs       –4.8    0.8          –0.2
5 yrs       –4.6    0.4          0.4
7 yrs       –4.5    0.0          1.0
10 yrs      –4.3    –0.2         0.7
20 yrs      –4.2    –1.0         –0.2
30 yrs      –4.0    –1.8         –1.2

Note: The "level" factor is written as a fall in yields, which denotes a rally in bond prices.

FIGURE 10.4 First Three Components—Rally, Flattening, Hump. The figure plots the level, twist, and hump factor loadings of Table 10.13 against maturity (out to 30 years).


The sensitivity to the principal components can be calculated from the yield curve sensitivity shown in Table 10.12 by transforming with the factor loadings FL:15

$\Delta_{pc} = -\,FL'\,\Delta_{yld}$

(with FL as tabulated in Table 10.13, maturities in rows and factors in columns).

Table 10.14 shows the transformed sensitivity to the principal component factors. Desk one is primarily exposed to a fall in the level of the curve (a rally in bond prices), making $10,700 for every 1-σ shift down in the level of the yield curve (1-σ rally in bond prices). Desk two is primarily exposed to a flattening of the curve, losing money when the curve flattens (desk two is short the long end of the curve in price terms, so it loses money as the long yield falls and long prices rally relative to the short end). The two desks combined have roughly equal sensitivity to the first two components, with very little to the third or higher components.

When the first few factors account for a large proportion of the portfolio variance, the factor risk provides a concise summary of the likely P&L; that is, a concise summary of the probable risk and a useful aggregation of the risk to a small number of factors.
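The transformation is a single matrix product. The sketch below uses the (rounded) loadings of Table 10.13 and the desk-one deltas of Table 10.12; because the published loadings are rounded to one decimal, the results land near, not exactly on, the Table 10.14 figures.

```python
# Sketch: desk exposures to the factors via Delta_pc = -FL' Delta_yld.
import numpy as np

FL = np.array([[-6.0, 1.4, -1.0], [-5.7, 1.3, -0.9], [-5.5, 1.2, -0.8],
               [-5.0, 1.0, -0.5], [-4.8, 0.8, -0.2], [-4.6, 0.4,  0.4],
               [-4.5, 0.0,  1.0], [-4.3, -0.2, 0.7], [-4.2, -1.0, -0.2],
               [-4.0, -1.8, -1.2]])                  # level, flatten, hump
desk_one = np.array([600, 100, 0, 1800, 0, 300, 0, 500, 0, -1500.0])  # $/bp

exposure = -FL.T @ desk_one
print(exposure)   # ~[10,700, -5,490, -680]; Table 10.14 shows 10,700,
                  # -5,549, -719, computed from unrounded loadings
```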

These ideas can be applied to individual trading units or subportfolios even when they do not measure sensitivity at the same granular level. Say that the second trading desk actually measured its sensitivity and managed its risk based on the yield curve points shown in Table 10.15. This makes direct comparison and aggregation of the risk across the units more difficult.

Using the factors specific to desk one (shown in Table 10.13) and desk two (in Table 10.15), one can transform the sensitivities for the two desks separately, but to the common yield curve components. Table 10.16 shows the sensitivity with respect to the level, flattening, and hump components.

TABLE 10.14 Sensitivity to Principal Component Factors

             Desk One                   Desk Two                   Total
             Exposure    Contribution   Exposure    Contribution   Exposure     Contribution
Level        $10,700     74.8%          $2,500      14.5%          $13,200      54.7%
Flattening   –$5,549     20.1%          –$5,856     79.4%          –$11,404     40.8%
Hump         –$719       0.3%           –$1,178     3.2%           –$1,897      1.1%
Residual     $2,700      4.8%           $1,121      2.9%           $3,283       3.4%

15 In this example, there is a minus sign, $\Delta_{pc} = -FL'\,\Delta_{yld}$, because the sensitivities are measured as $ per 1bp fall in yield, so that a positive number denotes long with respect to a rally in bond prices.



Principal Components and P&L Decomposition

The factor loadings and exposures can also be used after the fact to provide a straightforward P&L attribution in terms of observed market movements. Say the observed changes in the yield curve for a particular day are written as the column vector $(\Delta y_{0.25}, \ldots, \Delta y_{30})'$. These observed changes can be decomposed into factor moves, either using the factor loading matrix FL or by regressing the observed changes against a subset of the factor loadings:16

$\begin{pmatrix} \Delta y_{0.25} \\ \vdots \\ \Delta y_{30} \end{pmatrix} = \Delta f_1 \begin{pmatrix} fl_{1,0.25} \\ \vdots \\ fl_{1,30} \end{pmatrix} + \Delta f_2 \begin{pmatrix} fl_{2,0.25} \\ \vdots \\ fl_{2,30} \end{pmatrix} + \Delta f_3 \begin{pmatrix} fl_{3,0.25} \\ \vdots \\ fl_{3,30} \end{pmatrix} + e$

TABLE 10.15 Sample Sensitivity for Two Desks with Different Yield Curve Points

Desk One             Desk Two             Factor Loading, Desk Two
Maturity   $/bp      Maturity   $/bp      Level   Flatten   Hump
3 mths     600       1 yr       500       –5.5    1.2       –0.8
6 mths     100       2 yrs      1,500     –5.0    1.0       –0.5
1 yr       0         5 yrs      0         –4.6    0.4       0.4
2 yrs      1,800     10 yrs     0         –4.3    –0.2      0.7
3 yrs      0         30 yrs     –2,000    –4.0    –1.8      –1.2
5 yrs      300
7 yrs      0
10 yrs     500
20 yrs     0
30 yrs     –1,500

Note: This is the sensitivity to a 1bp fall in the appropriate yield; that is, a positive number means long the bond, making money when bond prices rise.

TABLE 10.16 Sensitivity to Principal Component Factors

             Desk One                   Desk Two                   Total
             Exposure    Contribution   Exposure    Contribution   Exposure     Contribution
Level        $10,700     74.8%          $2,250      11.7%          $12,950      52.6%
Flattening   –$5,549     20.1%          –$5,778     77.3%          –$11,326     40.3%
Hump         –$719       0.3%           –$1,289     3.8%           –$2,008      1.3%
Residual     $2,700      4.8%           $1,097      2.8%           $3,240       3.3%

16 The changes in Y and F are related by $\Delta Y = FL\cdot\Delta F$, so $\Delta F = FL^{-1}\cdot\Delta Y$, but if one wishes to use only a subset of the components, then regression is a convenient way to calculate a subset of the $\Delta f$s.



The $\{\Delta f_1, \Delta f_2, \Delta f_3\}$ give the particular day's market movement in terms of the first three factors: if $\Delta f_1$ is 1.5, this means that the observed market move is composed of 1.5 factor-one moves plus amounts for the other factors and a residual. The P&L due to these factors should be:

P&L attributable to factor i $= \delta_i\cdot\Delta f_i$, where $\delta_i$ is the sensitivity with respect to factor i.

Table 10.17 shows an example. Columns 1 and 2 show the change in yields for a particular day—the vector $(\Delta y_{0.25}, \ldots, \Delta y_{30})'$.

Regressing this against the first three columns of the factor-loading matrix in Table 10.13 gives the coefficients shown in the fourth column. In this case, the yield changes shown in the second column correspond to +0.99 of factor 1, –2.00 of factor 2, and +1.11 of factor 3. Multiplying the estimated factor changes by the risk for desk one shown in Table 10.14 then gives the P&L due to factors shown in the final column. In this case, the residual is very small; we can explain most of the total P&L by changes in the first three factors. For another day, we may not explain such a large portion of the total P&L.

TABLE 10.17 Changes in Yields, Changes in Factors, and P&L Due to Factor Changes

Maturity    Yld ch (bp)      Factor        Factor ch   P&L desk 1
3 mths      –9.80            Level         0.99        –10,609
6 mths      –9.27            Flattening    –2.00       –11,101
1 yr        –8.90            Hump          1.11        800
2 yrs       –7.72            Residual                  –60
3 yrs       –6.64            Total                     –20,970
5 yrs       –4.80
7 yrs       –3.48
10 yrs      –2.97
20 yrs      –2.28
30 yrs      –1.77


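The regression step is a standard least-squares fit. The sketch below reuses FL and numpy from the earlier snippet and the yield changes of Table 10.17; the attributed P&L depends on the sign convention for the level factor (written as a fall in yields), so treat the printed values as illustrative rather than an exact reproduction of the table.

```python
# Sketch: estimate the day's factor moves by least squares against the
# first three factor-loading columns, then attribute P&L factor by factor.
dy = np.array([-9.80, -9.27, -8.90, -7.72, -6.64,
               -4.80, -3.48, -2.97, -2.28, -1.77])   # bp changes, Table 10.17

df, *_ = np.linalg.lstsq(FL, dy, rcond=None)         # ~[0.99, -2.00, 1.11]
residual = dy - FL @ df                              # unexplained yield moves

exposures = np.array([10_700.0, -5_549.0, -719.0])   # desk one, Table 10.14
pnl_by_factor = exposures * df                       # compare Table 10.17
```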

User-Chosen Factors

Much of the analysis using principal components (for example, calculation of risk with respect to factors, P&L attribution to factors) can be applied with user-chosen factors. For example, one might prefer that the first yield-curve factor be a parallel shift, with rates all moving by one basis point.

Consider an arbitrary (but full-rank) transformation of the yield curve points:

$G = B'\cdot Y \;\Rightarrow\; \Delta_G = B^{-1}\cdot\Delta_F$

The total portfolio variance is:

$\sigma_p^2 = \Delta_F'\,\Sigma_F\,\Delta_F = \Delta_F'\,B'^{-1}\cdot B'\,\Sigma_F\,B\cdot B^{-1}\,\Delta_F = \Delta_G'\,\Sigma_G\,\Delta_G \;\Rightarrow\; \Sigma_G = B'\,\Sigma_F\,B$

The analysis of stand-alone volatility, contribution to risk, and so on can all be performed using $\Delta_G$ and $\Sigma_G$. One challenge is to determine the matrix B when, in general, only the first few columns will be chosen by the user, with the remaining columns representing a residual. This can be accomplished by Gram-Schmidt orthogonalization to produce the additional columns. I think that, because the residual factors are constructed to be orthogonal to the user-chosen factors, the variances of the two groups will be block-independent, so that it will make sense to talk about the proportion of the variance explained by the user-chosen versus residual factors.
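A minimal sketch of that completion step, under the assumption that the user supplies the leading columns and the standard basis is used to generate the residual directions:

```python
# Sketch: complete user-chosen leading columns to a full-rank B by
# Gram-Schmidt against the standard basis; trailing columns span the residual.
import numpy as np

def complete_basis(user_cols):
    n = user_cols.shape[0]
    cols = [c / np.linalg.norm(c) for c in user_cols.T]
    for e in np.eye(n):
        r = e - sum((e @ c) * c for c in cols)   # remove existing components
        if np.linalg.norm(r) > 1e-10:            # skip dependent candidates
            cols.append(r / np.linalg.norm(r))
    return np.column_stack(cols[:n])

# e.g., first factor = parallel shift of all ten curve points:
B = complete_basis(np.ones((10, 1)))
```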

10.6 RISK REPORTING

Effective, intelligent, and useful risk reporting is as important as the underlying analysis. Human intuition is not well adapted to recognize and manage randomness. Risks combine within a portfolio in a nonlinear and often highly nonintuitive manner. Even for the simplest case of normal distributions, the volatility (standard deviation) and VaR do not add, so that the volatility or VaR of a portfolio is less than the sum of the constituents—this is diversification. Various tools, techniques, and tricks need to be used to elucidate the risk for even relatively standard portfolios.

To illustrate and explain the techniques for analyzing portfolio risk, I focus on a small portfolio with diverse positions and risks, and on a sample risk report that includes the marginal contribution, best hedges, and so on. The intention is not only to explain what the measures are and how to calculate them, but also to provide insight into how to use them and why they are valuable.



Risk Reporting—Bottom Up versus Top Down

The risk reports discussed here and the analytics behind them are based on a detailed view of the portfolio, aggregated up to a top level for summary purposes. This is a bottom-up process. Alternatively, one could view risk reporting as a top-down process, the idea being that senior managers need a big-picture overview of firm-wide risk and do not need to be concerned with details of individual desks or units. A top-down approach is often driven by the number and complexity of assets and positions held by a large firm; a top-down approach allows many shortcuts and approximations that might be important at the micro level but do not matter much at the firm level.

Nonetheless, a bottom-up approach has important benefits to recommend it. First, even senior managers need to be concerned about risk at a relatively granular level. Not every day, not for every part of the portfolio, but there are times and places when a manager needs to be able to drill down and examine the risk at a more detailed level. A risk-reporting process should be like an onion or a matryoshka doll (babushka doll)—multiple layers that can be peeled back to display risk at a more disaggregated level. Drilling down is a natural part of a bottom-up approach but often difficult to do in a top-down approach.

A second benefit of a bottom-up approach is that reporting built by aggregating lower-level risk can be more easily compared and reconciled against the reporting used by those lower-level units. Reconciliation of risk numbers from disparate sources and using alternative methodologies can consume considerable resources at a large firm. Such reconciliation is important, however, because discrepancies can vitiate the usefulness of summary risk reports—lower-level managers distrust the summary reports because they do not match the risk they know from their daily management of the risk, and top-level managers cannot access a reliable view of the lower-level risk when necessary.

Sample Portfolio

I will consider a portfolio made up of four subportfolios (individual portfolio managers or trading desks):

- Government subportfolio:
  - Long $20 million U.S. Treasury 10-year bond
  - Long £25 million U.K. Gilt 10-year
  - Short $20 million-notional call option on a 5-year U.S. Treasury
- Swaps subportfolio:
  - Short $20 million 10-year swap plus
  - Long outright $30 million U.S. Treasury exposure
  - Net result is long swap spreads and long some residual U.S. Treasury exposure
- Credit subportfolio:
  - Long £55 million corporate bond spread (credit default swap or CDS on France Telecom)
- Equity subportfolio:
  - Long €7 million CAC futures
  - Long €5 million French company stock (France Telecom)

This is not a large portfolio in number of positions, only seven or eight, but it is diverse and complex in terms of products and risk exposure. This is an example of quantitative risk measurement techniques starting to bring some transparency to an otherwise complex and opaque situation.

The risks in this portfolio include:

- Yield risk
  - U.S. Treasury curve
  - U.K. Gilt curve
  - Swap curve or swap spread risk
- Volatility risk for the call option
- Credit risk
  - Traded credit spread for the CDS and issuer risk for the equity
  - Counterparty risk for the interest rate swap and CDS
- Equity risk
  - Both index risk (exposure to the CAC, a broad market index) and company-specific risk (France Telecom)
- FX risk
- Operational risk
  - Processing for futures (remember Barings)
  - Processing and recordkeeping for IRS, CDS, and option
  - Delivery risk for bonds and equities
- Model risk
  - IRS, CDS, call option

Here we will focus on market risk (yield, volatility, traded credit spread, equity, FX). The primary focus will be on the sample risk report shown in Table 10.18. The report is intended to detail not just the levels but also the sources of the portfolio's risk exposure. In this case, there are only seven positions, and it might be possible to manage such a small portfolio without the risk reporting technology laid out here, but even here comparing and contrasting exposures across disparate asset classes and currencies is not trivial.



Summary Risk Report

Tables 10.18 and 10.19 show sample risk reports for this portfolio. These are based on delta-normal or parametric estimation of the volatility and VaR. The report is the top-level report for the portfolio and summarizes the overall exposure and major sources of risk. A good risk-reporting program, however, is a little like an onion or a set of Russian dolls—each layer when peeled off exhibits the next layer and shows more detail. This is the top layer; I discuss more detailed reports in a following section, which parallel Tables 10.18 and 10.19 but zero in on a specific subportfolio.

TABLE 10.18 Sample Portfolio Risk Report—Summary Report

Panel A—Expected Volatility by Asset Class

                   Exp Vol ($)   Contribution   Correlation with portfolio
Overall            616,900       100.0
FI—rates           345,800       39.2           0.700
FI—swap spreads    38,760        –0.4           –0.071
Credit             65,190        2.8            0.265
Equity             296,400       35.8           0.746
FX                 236,100       21.8           0.571
Volatility         8,678         0.7            0.510

Panel B—Expected Volatility by Subportfolio

                   Exp Vol ($)   Contribution   Correlation with portfolio
Overall            616,900       100.0
Credit             65,540        2.5            0.237
Equity             312,100       39.0           0.771
Government         376,000       51.6           0.847
Swaps              75,350        6.9            0.562

Panel C—Volatility and 1-out-of-255 VaR

Volatility                         616,900
VaR Normal                         –1,640,000
VaR Student t (6 df)               –1,972,000
VaR Normal Mix (α = 1%, β = 5)     –1,680,000
VaR 4-sigma rule-of-thumb          –2,468,000


TABLE 10.19 Sample Portfolio Risk Report—Top Contributors and Replicating Portfolios Report

Panel A—Top Contributors to Risk (volatility)

                                                                           % Reduction in Volatility to
              Exp Vol                    Curr Position   Trade to Best
              (1-sig P&L)  Contribution  (M eqv)         Hedge (eqv)       Best Hedge   Zero Position
CACEqIndex    346,200      37.1          10.5            –12.4             25.0         24.4
GBPYld10yr    187,600      20.8          25.0            –56.2             27.1         17.8
USDYld10yr    202,900      20.6          31.0            –59.0             21.9         16.5

Top 1 negative
USDYld5yr     21,430       –2.1          –6.3            –107.1            19.3         –2.1

Top 3 Best Single Hedges
GBPYld10yr    187,600      20.8          25.0            –56.2             27.1         17.8
CACEqIndex    346,200      37.1          10.5            –12.4             25.0         24.4
GBPYld5yr     548          –0.1          0.0             0.0               22.6         –0.1

Panel B—Best Replicating Portfolios

                        One Asset          Three Assets        Five Assets
%Var / %Vol Explained   46.8 / 27.1        86.7 / 63.6         98.4 / 87.5

Asset / Eqv Position    GBPYld10yr 56.2    GBPYld10yr 43.2     GBPYld10yr 26.1
                                           CACEqIndex 8.9      CACEqIndex 11.9
                                           GBPFX 18.6          GBPFX 19.4
                                                               FTEEqSpecific 6.1
                                                               USDYld10yr 24.1


One note before turning to the analysis of the portfolio: most of my discussion is qualified with "probably," "roughly," "we can have reasonable confidence," and other similar terms. This is quite intentional. The measures and statistics in any reports such as these are based on estimates and past history. They are good and reasonable estimates, but anybody who has spent time in markets knows that uncertainty abounds and one should always treat such reports, measures, and statistics carefully. They provide a view into what happened in the past and what might happen in the future, but the markets always provide new and unexpected ways to make and lose money.

Volatility The first thing to note is the overall volatility: The daily or expected volatility is around $616,900. We mean by this that the standard deviation of the daily P&L distribution is roughly $616,900. When considering the daily volatility, we are examining everyday trading activity and not tail events, and so we can have some confidence that assuming normality is probably reasonable. Using this, we can infer that the daily losses or profits should be more than ±$616,900 about one day out of three, based on a normally distributed variable being below –1σ or above +1σ with roughly 30 percent probability.

The observation on likely P&L immediately provides a scale for the portfolio. For example, if this were a real-money portfolio with capital of $10 million, we would expect gains or losses roughly 6.2 percent or more of capital every three days—a hugely volatile and risky undertaking. On the other hand, if the capital were $500 million, we would expect a mere 0.1 percent or more every three days, or roughly 2 percent per year (multiplying by $\sqrt{255}$ to annualize)—an unreasonably low-risk venture with probably correspondingly low returns.
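The arithmetic behind these statements is simple to check directly. Here is a minimal sketch; the capital figures are the hypothetical ones from the text, not report inputs:

```python
# Minimal check of the scale arithmetic above; the capital figures are the
# hypothetical ones used in the text, not report data.
from math import sqrt

from scipy.stats import norm

daily_vol = 616_900.0

# A normal variable falls beyond +/-1 sigma about 32% of the time: one day in three.
print(f"P(beyond 1 sigma) = {2 * norm.cdf(-1.0):.1%}")

for capital in (10e6, 500e6):
    pct_daily = daily_vol / capital
    pct_annual = pct_daily * sqrt(255)          # annualize by sqrt(255)
    print(f"capital {capital:>13,.0f}: daily {pct_daily:.2%}, annual {pct_annual:.1%}")
```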

The daily volatility gives a scale for the portfolio at a point in time, but even more importantly provides a reasonably consistent comparison across time. Were the daily volatility to rise to $1.2 million next week, we could be pretty confident that the risk of the portfolio, at least the risk under standard day-by-day trading conditions, had roughly doubled.

The volatility also provides a reasonably consistent comparison across asset classes and trading desks. The report shows that the daily volatility for fixed income products (bonds and swaps) is about $346,000 and equity is about $296,000. These statistics are the daily volatility of these products considered in isolation: the P&L distribution of fixed-income products alone has a volatility of about $346,000. The similar scale of risk in these two products is valuable information, because there is no way to know this directly from the raw nominal positions: the notional in fixed income ($20 million in U.S. Treasuries, £25 million in U.K. Gilts, $20 million in swap spreads) is many times that in equities (€7 million in CAC futures, €5 million in France Telecom stock).

Volatility by asset class naturally does not sum to the overall volatility: the sum by asset class of $990,000 versus the overall of $616,900 shows the effect of diversification.

VaR The next item to note is the daily VaR. The VaR is calculated at a 0.4 percent level. This means the probability of a worse loss should be 0.4 percent or 1 out of 250. The probability level for VaR is always somewhat arbitrary. In this case, 0.4 percent was chosen because it corresponds to roughly one trading day per year (1 out of 255). Such a value should not be considered an unusual event; in Litterman's words (1996, 74): "Think of this not as a 'worst case,' but rather as a regularly occurring event with which [one] should be comfortable."

As with the volatility, the VaR provides a scale, in this case, the minimum loss one should expect from the worst day in a year. It is important to remember that this is the minimum daily loss one should expect from the worst trading day in the year. Purely due to random fluctuations, the actual loss may of course be worse (or possibly better) and there could be more than one day in a year with losses this bad or worse.

Four values for the VaR are shown. The first is derived from the normality assumption and is just 2.652 × the daily volatility—the probability that a normal variable will be 2.652σ below the mean is 0.4 percent. The second is based on an assumption that the overall P&L distribution is Student t with six degrees of freedom. This allows for fat tails—the Student t-distribution has the same volatility but fatter tails than the normal. The third is based on an assumption that each asset's P&L distribution is a mixture of normals (99 percent probability volatility = σm, 1 percent probability volatility = 5σm), and again allows for fatter tails relative to normal. The fourth is based on Litterman's rule of thumb that a 4σ event occurs roughly once per year, so that the VaR is just four times the volatility. These alternate values for VaR are useful and adjust for the possibility that the distribution of market risk factors may have fat tails.
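As a concrete illustration, the sketch below reproduces figures close to Panel C of Table 10.18 from the portfolio volatility alone. The mixture normalization (choosing σm so that the mixture's overall volatility matches the portfolio volatility) is my assumption about how the figures were produced, not something the report states.

```python
# Sketch: VaR at the 0.4% (1-out-of-255) level under the four assumptions
# described above. The mixture normalization (s_m chosen so the overall
# volatility matches `vol`) is an assumption, not stated in the report.
from math import sqrt

from scipy.optimize import brentq
from scipy.stats import norm, t

vol, z = 616_900.0, 0.004

var_normal = norm.ppf(z) * vol                   # ~ -1.64MM (2.652 sigma)

# Student t with 6 df, rescaled to have volatility `vol`
# (a t with nu df has variance nu/(nu-2)).
var_t6 = t.ppf(z, 6) / sqrt(6 / 4) * vol         # ~ -1.97MM

# Two-point mixture: with prob 99% vol = s_m, with prob 1% vol = 5*s_m.
# Overall variance is (0.99 + 0.01*25) * s_m^2.
s_m = vol / sqrt(0.99 + 0.01 * 25)
q = brentq(lambda x: 0.99 * norm.cdf(x) + 0.01 * norm.cdf(x / 5) - z, -10.0, 0.0)
var_mix = q * s_m                                # ~ -1.67MM

var_4sigma = -4 * vol                            # Litterman rule of thumb, ~ -2.47MM

for name, v in [("normal", var_normal), ("Student t (6df)", var_t6),
                ("normal mix", var_mix), ("4-sigma rule", var_4sigma)]:
    print(f"VaR {name:<16}: {v:>12,.0f}")
```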

These VaR values should be used with care, more care indeed than the volatility. One might want to examine whether assets such as those in this portfolio have exhibited fat tails in the past, and whether and to what extent assets in the portfolio have generated skewed or fat-tailed distributions. The estimates here are based on assumptions of normality for risk factors and linearity for asset sensitivities (the estimates are delta-normal or parametric). The portfolio contains a put option that is nonlinear and will generate a skewed P&L distribution. The delicate nature of estimating and using VaR estimates really argues for a separate report and more detailed analysis.


In the end, I think the common-sense approach said to be used at Goldman (Litterman 1996, 54) has much to recommend it: "Given the non-normality of daily returns that we find in the financial markets, we use as a rule of thumb the assumption that four-standard deviation events in financial markets happen approximately once per year." Under normality, once-per-year events are only 2.65 standard deviations, so a 4σ rule of thumb is substantially higher, as seen from the report.

Marginal Contribution to Volatility and Correlation The marginal contribution to volatility is one of the most useful tools for decomposing and understanding volatility and risk. Table 10.18 shows the MCP—proportional (or percentage) marginal contribution—so terms add to 100 percent. The marginal contribution by asset class shows that fixed income and equities are the biggest contributors, each contributing roughly one-third of the risk. Because portfolio effects are paramount but often difficult to intuit, the marginal contribution is a better guide to understanding portfolio risk than is the stand-alone volatility. In this simple portfolio, fixed income and equities have roughly the same stand-alone volatility and roughly the same contribution, but for more complex portfolios, this will often not be the case.

The tables show a breakdown of marginal contribution by asset class and subportfolio. Depending on the institutional structure, different classifications and breakdowns may be more useful. The table by asset class shows the risk for fixed-income instruments independent of where they are held. The swaps desk holds some outright rate risk, as we shall see, so that the volatility and contribution for swap spreads and for the swaps desk itself are different. Examining the contribution by subportfolio shows that the government desk contributes most to the overall portfolio volatility. Much of the FX risk is held by the government desk (in the form of a partially hedged U.K. bond), and this leads to the large contribution by the government desk.

Swap spreads actually show a small but negative contribution to the overall volatility. The negative contribution does not mean that there is no risk in swap spreads—on a particular day, swap spreads may move in the same direction as the rest of the portfolio, thus leading to larger gains or losses—but it does give a reasonable expectation that over time the exposure to swap spreads will not add very much to the overall portfolio volatility.

The correlation of the swap rates with the full portfolio helps elucidate why swaps have a negative contribution. The correlation is slightly negative, and so the swaps position (slightly) hedges the overall portfolio: for small increases in the position, the overall volatility falls. Turning back to the contribution and correlation by asset class, we see that equities are the most highly correlated with the portfolio, which explains why equities contribute so much to the volatility even though the stand-alone volatility is less than for fixed income.
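The mechanics behind these statements are easy to demonstrate. The sketch below uses a made-up three-asset covariance matrix (not the report's data) to show that proportional marginal contributions sum to 100 percent and that an asset negatively correlated with the portfolio receives a negative contribution:

```python
# Sketch with made-up numbers (not the report's data): proportional marginal
# contributions MCP_i = v_i * (Sigma v)_i / sigma_p^2 sum to 100%, and an
# asset negatively correlated with the portfolio contributes negatively.
import numpy as np

vols = np.array([0.010, 0.012, 0.008])           # stand-alone volatilities
corr = np.array([[1.0, 0.6, -0.5],
                 [0.6, 1.0, -0.4],
                 [-0.5, -0.4, 1.0]])
sigma = corr * np.outer(vols, vols)              # covariance matrix
v = np.array([1.0, 1.0, 1.0])                    # positions

sv = sigma @ v
var_p = v @ sv
vol_p = np.sqrt(var_p)

mcp = v * sv / var_p                             # proportional contributions
corr_with_port = sv / (vols * vol_p)             # correlation of each asset with portfolio

print("portfolio vol:", vol_p)
print("MCP:", mcp, "sum =", mcp.sum())           # sums to 1.0
print("correlation with portfolio:", corr_with_port)
```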

Depending on the size and complexity of the portfolio, examining contribution to risk by individual assets may be useful. For a large and diverse portfolio, there will generally be many assets, and contributions by individual assets should be left to a more detailed next reporting level, below the top-level summary. For a smaller portfolio, examination of all assets is valuable.

For most any portfolio, however, the top contributors provide useful insight into the portfolio. For this sample portfolio, the top three contributors give a succinct summary of the major risks faced by the portfolio: equity index (CAC) and U.S. and U.K. yields. The top negative contributor shows those assets that reduce risk or hedge the portfolio. For this sample portfolio, there is only one asset—five-year U.S. yields—that has a negative contribution.17

Best Single Hedges and Replicating Portfolios The marginal contributions show the contribution to risk for the existing portfolio and provide a guide to how the volatility will likely change for small changes in holdings. But the marginal contributions are not the best guide to the likely effect of large changes in asset holdings, or what the best hedging assets might be. For this, the best hedges and replicating portfolios are useful.

For any particular asset, the best hedge position is that position which minimizes the expected volatility. This involves a finite, possibly large, change in position. The top best hedge will often differ from the top marginal contributor; for the sample portfolio shown in Table 10.18, the Equity Index (CAC) is the largest marginal contributor but the second-top best hedge.

The top contributors and the top single hedges measure different characteristics of the portfolio. The top contributor to risk is the top contributor given the current positions. It tells us something about the composition of the current portfolio. The best single hedge, in contrast, is that asset that would give the largest reduction in volatility if we bought or sold some large amount. It tells us what would happen for alternate positions. We can also treat the best hedge as a mirror or replicating portfolio.

For the sample portfolio in Tables 10.18 and 10.19, the CAC Equity Index is the top contributor, but GBP 10-year yields is the top best hedge. The GBP 10-year yields position is the best hedge because it is highly correlated with USD 10-year yields, and together, these contribute 27 percent of the risk. A hedge using GBP 10-year will hedge both the existing GBP 10-year and the USD 10-year positions.

17 To my knowledge, Goldman Sachs pioneered the use of reporting top contributors and they have trademarked the term Hot Spots for such a report—see Litterman (1996).

The top best hedge can be thought of as a replicating portfolio, in the sense that it is the single asset that best replicates the portfolio. For the GBP 10-year yield, the trade from the current holding to the best hedge is a sale of £56.2 million worth, which means that a buy of £56.2 million would be the best single-asset replicating portfolio. Such a replicating portfolio would explain 27.1 percent of the volatility.

Replicating portfolios can provide a useful proxy or summary of the actual portfolio, but the single-asset portfolio is often too simple. The three-asset and five-asset portfolios provide a much richer summary, and explain far more of the portfolio volatility. The five-asset portfolio explains 87.5 percent of the volatility and provides a valuable summary of the portfolio. The portfolio largely behaves like:

1. Long GBP 10-year yields (long 10-year bond, £26 million).
2. Long CAC Equity index (€11.9 million).
3. Long GBP FX (£19.4 million worth of FX exposure due to holding foreign currency bonds and equities).
4. Long company-specific equity exposure (€6.1 million).
5. Long U.S. 10-year yields ($24.1 million equivalent).

Reporting for Subportfolios

The reports in Tables 10.18 and 10.19 show the top-level summary for the full portfolio. According to that summary, the government portfolio contributes almost half the risk to the overall portfolio. Someone managing the overall risk needs the ability to drill down to examine the government portfolio in more detail. An effective way to drill down is to provide the same summary information, plus additional detail.

Tables 10.20 and 10.21 simply mimic the top-level report shown in Tables 10.18 and 10.19: expected volatility by asset class, top contributors, and top best hedges. In this case, the subportfolio is so simple—$20 million in a U.S. Treasury, £25 million in a U.K. Gilt, and $20 million in an option—that the summary is hardly necessary. (The replicating portfolios are not shown because they are trivial—the portfolio itself only contains three positions.) The summary does, nonetheless, show that the government portfolio incorporates both fixed-income risk (to changes in yields) and FX risk (due to a dollar-based portfolio holding a sterling-denominated bond).

Table 10.22 shows details by risk factor. For this subportfolio, the risk factors are yields (par bond rates) and FX rates. The top panel shows the contribution by risk factor. The holdings for this portfolio are U.S. Treasuries and U.K. Gilts, and Table 10.22 shows that roughly one-third of the risk arises from each, with the balance resulting from the FX exposure of holding a sterling bond in a dollar portfolio.

TABLE 10.20 Summary Report for Government Subportfolio

Panel A—Expected Volatility by Asset Class

                     Exp Vol ($)   Contribution   Correlation with portfolio
Overall                  376,000          100.0
FI—rates                 280,400           62.6          0.840
FI—swap spreads                0            0.0
Credit                         0            0.0
Equity                         0            0.0
FX                       204,600           36.6          0.672
Volatility                 8,678            0.8          0.352

Panel B—Expected Volatility by Subportfolio

                     Exp Vol ($)   Contribution   Correlation with portfolio
Overall                  376,000          100.0
Credit
Equity
Government               376,000          100.0          1.000
Swaps

Panel C—Volatility and 1-out-of-255 VaR

Volatility                             376,000
VaR Normal                            –999,700
VaR Student t (6df)                 –1,202,000
VaR Normal Mix (a = 1%, b = 5)      –1,024,000
VaR 4-sigma rule-of-thumb           –1,504,000

TABLE 10.21 Sample Portfolio Risk Report—Top Contributors

                 Exp Vol                   Curr Position   Trade to Best   % Reduction in Volatility to
              (1-sig P&L)   Contribution         (M eqv)     Hedge (eqv)   Best Hedge     Zero Position
GBPYld10yr        187,600           41.5            25.0           –41.7         44.6              35.3
GBPFX             204,600           36.6            18.0           –22.3         25.9              24.8
USDYld10yr        130,800           24.8            20.0           –41.0         29.9              21.0


The bottom panel shows the sensitivity of the portfolio to a 1-σ move in each particular risk factor. This is essentially the stand-alone volatility of each risk factor except that it is signed (positive if the P&L is positive in response to a downward move in rates, negative if otherwise).18 This provides a very detailed or granular view of the portfolio, possibly too granular for someone managing the full portfolio but necessary for someone managing the details of the subportfolio. The risk is expressed as sensitivity to a 1-σ move instead of more traditional measures such as sensitivity to a 1 bp move or 10-year bond equivalents because the sensitivity to a 1-σ move allows comparison across any and all risk factors, asset classes, and currencies.

The sensitivity report is just the view of this simple portfolio that is made up of:

- Long U.S. 10-year bond
- Long U.K. 10-year bond
- Short U.S. option on 5-year bond
- Long sterling from owning the U.K. bond

TABLE 10.22 Individual Position Report for Government Subportfolio

Contribution to Risk

Yield Curve        USD        GBP        EUR
Yld2yr
Yld5yr            –3.7        0.0
Yld10yr           24.8       41.5
Yld30yr
FX                36.6        0.0

Sensitivity to 1-sig move

Yield Curve        USD        GBP        EUR
Yld2yr
Yld5yr          21,430          0
Yld10yr        130,800    187,600
Yld30yr
FX             204,600          0

18 Note that the sensitivity for the 10-year U.S. yield position is the same as the stand-alone volatility of the 10-year bond discussed in the preceding chapter because the current portfolio is just an expanded version of that portfolio.

The value of examining the detailed report does not show up using such a simple portfolio, but does when additional positions are added. Tables 10.23, 10.24, and 10.25 show the summary and detailed reports for a government subportfolio holding more, and more complex, positions.19

TABLE 10.23 More Complex Government Subportfolio—Summary Report

Panel A—Expected Volatility by Asset Class

                     Exp Vol ($)   Contribution   Correlation with portfolio
Overall                  240,900          100.0
FI—rates                 125,800           31.6          0.606
FI—swap spreads                0            0.0
Credit                         0            0.0
Equity                         0            0.0
FX                       192,100           68.4          0.858
Volatility                 8,678            0.0         –0.001

Panel B—Expected Volatility by Subportfolio

                     Exp Vol ($)   Contribution   Correlation with portfolio
Overall                  240,900          100.0
Credit                         0            0.0
Equity                         0            0.0
Government               240,900          100.0          1.000
Swaps                          0            0.0

Panel C—Volatility and 1-out-of-255 VaR

Volatility                             240,900
VaR Normal                            –640,600
VaR Student t (6df)                   –770,200
VaR Normal Mix (a = 1%, b = 5)        –656,100
VaR 4-sigma rule-of-thumb             –963,700

19 The additional positions are short $30M worth of USD and GBP 5-year bonds, long $60M worth of EUR 5-year bonds, and short $40M worth of EUR 10-year bonds.


TABLE 10.24 More Complex Government Subportfolio—Top Contributors and Replicating Portfolios Report

Panel A—Top Contributors to Risk (volatility)

                 Exp Vol                   Curr Position   Trade to Best   % Reduction in Volatility to
              (1-sig P&L)   Contribution         (M eqv)     Hedge (eqv)   Best Hedge     Zero Position
EURFX             232,900           55.4            22.7           –13.4         18.1               9.1
GBPYld5yr         157,000           18.7           –30.0            13.2          4.2              –2.5
GBPFX             174,800           13.0           –15.4             3.8          1.6             –12.5

Top 1 Negative

USDYld10yr        130,800          –10.5            20.0             7.1          1.9             –22.7

Top 3 Best Single Hedges

EURFX             232,900           55.4            22.7           –13.4         18.1               9.1
GBPYld5yr         157,000           18.7           –30.0            13.2          4.2              –2.5
USDYld5yr         123,600           12.5           –36.3            17.2          3.0              –0.7

Panel B—Best Replicating Portfolios

                          One Asset            Three Assets          Five Assets
%Var/%Vol Explained      32.9 / 18.1            75.2 / 50.2          83.6 / 59.4

Asset / Eqv Pos'n     EURFX 13.4             EURFX      23.8      EURFX       25.0
                                             GBPFX     –15.6      GBPFX      –15.5
                                             GBPYld5yr  –6.1      GBPYld5yr  –23.5
                                                                  GBPYld10yr  23.0
                                                                  EURYld10yr –11.8


The summary report shows

- Most of the risk is actually contributed by FX exposure.

The top contributors and replicating portfolios report shows

- Euro FX exposure is by far the largest contributor.
- Euro FX is the only single hedge that reduces the portfolio volatility to any extent.
- Yields are important, but mainly in combination, as spreads.

The individual position report provides insight into what is producing this pattern of exposures.

- In the United States and United Kingdom, short 5-year bonds versus long 10-year bonds (a yield curve flattening position that benefits when the yield curve flattens).
- In Europe, long 5-year bonds and short 10-year bonds (a yield curve steepening position), in roughly the same size as the sum of the United States and United Kingdom taken together.

TABLE 10.25 Individual Position Report for Government Subportfolio, More Complex Portfolio

Contribution to Risk

Yield Curve        USD        GBP        EUR
Yld2yr
Yld5yr            12.5       18.7        9.8
Yld10yr          –10.5       –8.9       10.0
Yld30yr
FX                           13.0       55.4

Sensitivity to 1-sig move

Yield Curve        USD        GBP        EUR
Yld2yr
Yld5yr        –123,600   –157,000    223,200
Yld10yr        130,800    187,600   –277,300
Yld30yr
FX                       –174,800    232,900


10.7 CONCLUSION

This chapter has focused on risk reporting and portfolio risk tools applied to market risk. These tools help us understand the structure of the portfolio and how risks interact within the portfolio. All the examples are based on parametric estimation and delta-normal or linear approximations. Although many of the concepts (marginal contribution, for example) can also be applied when volatility is estimated by historical simulation or Monte Carlo, it is easiest to use these tools in a linear or delta-normal framework.

We now turn from our focus on market risk to considering credit risk. The fundamental idea remains—we care about the P&L distribution—but the tools and techniques for estimating the P&L distribution will often be different enough that we need to consider credit risk as a separate category.

APPENDIX 10.1: VARIOUS FORMULAE FOR MARGINAL CONTRIBUTION AND VOLATILITIES

Marginal Contribution for Subportfolios—Partitioning

The marginal contribution can be calculated not just for single assets but also for groups of assets or for subportfolios. (See also Marrison 2002, 142.) For the full portfolio, the weights are the column vector:

$$ v = [v_1\ v_2\ \ldots\ v_n]' $$

This vector can be partitioned into multiple vectors:

$$ v_a = [v_{1a}\ v_{2a}\ \ldots\ v_{na}]' $$
$$ v_b = [v_{1b}\ v_{2b}\ \ldots\ v_{nb}]' $$
$$ \ldots $$
$$ v_z = [v_{1z}\ v_{2z}\ \ldots\ v_{nz}]' $$

with

$$ v = v_a + v_b + \cdots + v_z $$


These vectors can be formed into a matrix (which will have n rows and as many columns as partitions):

$$ V = [v_a\ v_b\ \ldots\ v_z] $$

The partition might take the form of grouping assets together, for example, grouping assets 1 and 2 in partition a and all other assets on their own:

$$ v_a = [v_1\ v_2\ 0\ 0\ \ldots\ 0]' $$
$$ v_b = [0\ 0\ v_3\ 0\ \ldots\ 0]' $$
$$ v_c = [0\ 0\ 0\ v_4\ 0\ \ldots\ 0]' $$
$$ \ldots $$

or it may take the form of subportfolios, so that the components $v_{1a}, \ldots, v_{na}$ represent subportfolio a, $v_{1b}, \ldots, v_{nb}$ represent subportfolio b, and so on, with the subportfolios adding to the total: $v = v_a + v_b + \cdots + v_z$.

Whatever partition we use, the expressions

$$ MCL = V'[\Sigma v]/\sigma_p \qquad (10.4a) $$
$$ MCP = V'[\Sigma v]/\sigma_p^2 \qquad (10.4b) $$

will each produce a single-column vector with as many rows as partitions. Each element of this vector will be the marginal contribution for the corresponding partition.

This partition can also be used to calculate the stand-alone variance due to each group of assets or subportfolio. The expression

$$ \mathrm{diag}\left(V'\Sigma V\right) $$

will give the stand-alone variances.
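A short numerical sketch of equations (10.4a) and (10.4b), with a made-up covariance matrix and a partition of four assets into two subportfolios:

```python
# Sketch of equations (10.4a)/(10.4b): marginal contribution by partition.
# Covariance matrix and positions are made up for illustration.
import numpy as np

sigma = np.array([[4.0, 1.0, 0.5, 0.2],
                  [1.0, 3.0, 0.4, 0.1],
                  [0.5, 0.4, 2.0, 0.3],
                  [0.2, 0.1, 0.3, 1.0]])
v = np.array([1.0, 2.0, 1.0, 3.0])

# Partition: assets 1-2 in subportfolio a, assets 3-4 in subportfolio b.
V = np.zeros((4, 2))
V[:2, 0] = v[:2]
V[2:, 1] = v[2:]
assert np.allclose(V.sum(axis=1), v)             # partitions add to v

sv = sigma @ v
var_p = v @ sv
vol_p = np.sqrt(var_p)

mcl = V.T @ sv / vol_p                           # (10.4a), levels: sums to vol_p
mcp = V.T @ sv / var_p                           # (10.4b), proportions: sums to 1
standalone_var = np.diag(V.T @ sigma @ V)        # stand-alone variances by partition

print(mcl, mcl.sum(), vol_p)
print(mcp, mcp.sum())
print("stand-alone vols:", np.sqrt(standalone_var))
```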

Volatility for Single-Asset Zero Position

Equation (10.5) gives the portfolio volatility at the zero position for asset k as:

$$ \text{Volatility at asset } k \text{ zero position} = \sqrt{v'\Sigma v - 2v_k[\Sigma v]_k + v_k\sigma_{kk}v_k} \qquad (10.5) $$


The logic of this is that the zero position in k means $v_k = 0$, and in the original expression for the variance, the row

$$ [\Sigma v]_k = \sum\nolimits_j \sigma_{kj} v_j $$

gets multiplied by $v_k$ and thus must be zeroed out. This is accomplished by subtracting it from the original variance. Also, because $v_k = 0$, the column $\sum_i v_i\sigma_{ik}$ must be zeroed. By the symmetry of $\Sigma$, this will also be equal to $[\Sigma v]_k$, so we must subtract it twice. But this will duplicate the entry $v_k\sigma_{kk}v_k$, so it must be added back once.

Volatility for Single-Asset Best Hedge Position

Equation (10.6) gives the portfolio volatility at the best hedge position as:

$$ \text{Volatility at asset } k \text{ best hedge position} = \sigma^*_{p(k)} = \sqrt{v'\Sigma v - \frac{\left([\Sigma v]_k\right)^2}{\sigma_{kk}}} \qquad (10.6) $$

This works because (considering the best hedge $v^*$ for asset k):

$$ v'\Sigma v = v_1[\Sigma v]_1 + \cdots + v_n[\Sigma v]_n $$
$$ v^{*\prime}\Sigma v^* = v_1[\Sigma v^*]_1 + \cdots + v^*_k[\Sigma v^*]_k + \cdots + v_n[\Sigma v^*]_n $$

But $[\Sigma v^*]_k = 0$ and $v^*_i = v_i$ for $i \ne k$, so

$$ v^{*\prime}\Sigma v^* = v_1[\Sigma v^*]_1 + \cdots + v_k[\Sigma v^*]_k + \cdots + v_n[\Sigma v^*]_n $$

Now the only element different between $v_i[\Sigma v]_i$ and $v_i[\Sigma v^*]_i$ for each i is $v_i\sigma_{ki}(v_k - v^*_k)$, which taken all together is $(v_k - v^*_k)[\Sigma v]_k$, so that

$$ v^{*\prime}\Sigma v^* = v'\Sigma v - v_k[\Sigma v]_k + v^*_k[\Sigma v]_k $$

But note that

$$ v_k - v^*_k = v_k + \left([\Sigma v]_k - v_k\sigma_{kk}\right)/\sigma_{kk} = [\Sigma v]_k/\sigma_{kk} $$

so we end up with equation (10.6).
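Both equations (10.5) and (10.6) are easy to verify numerically against a brute-force recomputation; a sketch with made-up data:

```python
# Numerical check of equations (10.5) and (10.6) against brute force,
# with a made-up covariance matrix and positions.
import numpy as np

sigma = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.4],
                  [0.5, 0.4, 2.0]])
v = np.array([1.0, 2.0, 1.0])
k = 1                                            # asset whose position we vary

sv = sigma @ v
var_p = v @ sv

# Equation (10.5): volatility with asset k's position set to zero.
vol_zero = np.sqrt(var_p - 2 * v[k] * sv[k] + v[k] * sigma[k, k] * v[k])

# Equation (10.6): volatility at asset k's best hedge position.
vol_bh = np.sqrt(var_p - sv[k] ** 2 / sigma[k, k])
v_star = v[k] - sv[k] / sigma[k, k]              # the best hedge holding itself

# Brute-force checks.
v0 = v.copy(); v0[k] = 0.0
assert np.isclose(vol_zero, np.sqrt(v0 @ sigma @ v0))
vb = v.copy(); vb[k] = v_star
assert np.isclose(vol_bh, np.sqrt(vb @ sigma @ vb))

print("zero-position vol:", vol_zero, " best-hedge vol:", vol_bh,
      " best-hedge position:", v_star)
```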


Volatility for Multiple-Asset Best Hedge

The best hedge positions for two assets, j and k, $v^*_j$ and $v^*_k$, are the solution to:

$$ [\Sigma v^*]_j = 0, \qquad [\Sigma v^*]_k = 0 $$

But

$$ [\Sigma v^*]_j = [\Sigma v]_j - \left[v_j\sigma_{jj} + v_k\sigma_{jk}\right] + \left[v^*_j\sigma_{jj} + v^*_k\sigma_{jk}\right] = 0 $$
$$ [\Sigma v^*]_k = [\Sigma v]_k - \left[v_j\sigma_{jk} + v_k\sigma_{kk}\right] + \left[v^*_j\sigma_{jk} + v^*_k\sigma_{kk}\right] = 0 $$

which means

$$ \text{Best Hedge} = \begin{bmatrix} v^*_j \\ v^*_k \end{bmatrix} = \begin{bmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{bmatrix}^{-1} \begin{bmatrix} -[\Sigma v]_j + v_j\sigma_{jj} + v_k\sigma_{jk} \\ -[\Sigma v]_k + v_j\sigma_{kj} + v_k\sigma_{kk} \end{bmatrix} = -\begin{bmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{bmatrix}^{-1} \begin{bmatrix} [\Sigma v]_j \\ [\Sigma v]_k \end{bmatrix} + \begin{bmatrix} v_j \\ v_k \end{bmatrix} \qquad (10.7) $$

(Note that the expression for the mirror portfolio coefficients is essentially the least-squares normal equations. Calculating a mirror portfolio or replicating portfolio is effectively regressing the portfolio return against the selected assets.)

Equation (10.8) gives the portfolio volatility at the best hedge position as:

$$ \text{Volatility at asset } j \text{ and } k \text{ best hedge} = \sigma^*_{p(j\&k)} = \sqrt{v'\Sigma v - \begin{bmatrix} [\Sigma v]_j & [\Sigma v]_k \end{bmatrix} \begin{bmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{bmatrix}^{-1} \begin{bmatrix} [\Sigma v]_j \\ [\Sigma v]_k \end{bmatrix}} \qquad (10.8) $$

The variance at the best hedge is:

$$ v^{*\prime}\Sigma v^* = v_1[\Sigma v^*]_1 + \cdots + v^*_j[\Sigma v^*]_j + \cdots + v^*_k[\Sigma v^*]_k + \cdots + v^*_n[\Sigma v^*]_n $$

But $[\Sigma v^*]_{j,k} = 0$ and $v^*_i = v_i$ for $i \ne j, k$, so

$$ v^{*\prime}\Sigma v^* = v_1[\Sigma v^*]_1 + \cdots + v_j[\Sigma v^*]_j + \cdots + v_k[\Sigma v^*]_k + \cdots + v_n[\Sigma v^*]_n $$

Now the elements different between $v_i[\Sigma v]_i$ and $v_i[\Sigma v^*]_i$ for each i are $v_i\sigma_{ji}(v_j - v^*_j) + v_i\sigma_{ki}(v_k - v^*_k)$, which taken all together is $(v_j - v^*_j)[\Sigma v]_j + (v_k - v^*_k)[\Sigma v]_k$, so that

$$ \text{Variance} = v^{*\prime}\Sigma v^* = v'\Sigma v - \begin{bmatrix} [\Sigma v]_j & [\Sigma v]_k \end{bmatrix} \begin{bmatrix} v_j - v^*_j \\ v_k - v^*_k \end{bmatrix} = v'\Sigma v - \begin{bmatrix} [\Sigma v]_j & [\Sigma v]_k \end{bmatrix} \begin{bmatrix} \sigma_{jj} & \sigma_{jk} \\ \sigma_{kj} & \sigma_{kk} \end{bmatrix}^{-1} \begin{bmatrix} [\Sigma v]_j \\ [\Sigma v]_k \end{bmatrix} $$
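A numerical sketch of equations (10.7) and (10.8), again with made-up data; the asserts confirm that the best-hedge positions zero out the j and k elements of the gradient $\Sigma v^*$:

```python
# Numerical check of equations (10.7) and (10.8): two-asset best hedge.
# Covariance matrix and positions are made up for illustration.
import numpy as np

sigma = np.array([[4.0, 1.0, 0.5, 0.2],
                  [1.0, 3.0, 0.4, 0.1],
                  [0.5, 0.4, 2.0, 0.3],
                  [0.2, 0.1, 0.3, 1.0]])
v = np.array([1.0, 2.0, 1.0, 3.0])
j, k = 0, 2                                      # the two hedge assets

sv = sigma @ v
S = sigma[np.ix_([j, k], [j, k])]                # 2x2 block [[sjj, sjk], [skj, skk]]

# Equation (10.7): v*_{j,k} = v_{j,k} - S^{-1} [Sigma v]_{j,k}
v_star = v[[j, k]] - np.linalg.solve(S, sv[[j, k]])

# Equation (10.8): volatility at the j & k best hedge.
vol_bh = np.sqrt(v @ sv - sv[[j, k]] @ np.linalg.solve(S, sv[[j, k]]))

# Brute-force check: recomputing from the full position vector agrees,
# and the gradient [Sigma v*] vanishes in the j and k coordinates.
vb = v.copy(); vb[[j, k]] = v_star
assert np.isclose(vol_bh, np.sqrt(vb @ sigma @ vb))
assert np.allclose((sigma @ vb)[[j, k]], 0.0)

print("best-hedge positions:", v_star, " best-hedge vol:", vol_bh)
```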

Contribution to Volatility, VaR, Expected Shortfall

As discussed in the text, the properties of the marginal contribution to risk derive from the linear homogeneity of the risk measure and do not depend on the particular estimation method. As a result, the marginal contribution to risk can be calculated for volatility, VaR, or expected shortfall, using the delta-normal, Monte Carlo, or historical simulation approach.

McNeil, Frey, and Embrechts (2005, equations 6.23, 6.24, and 6.26) give formulae for contributions for volatility, VaR, and expected shortfall. Repeating from the main text, say that the portfolio is made up of investments in n assets, the P&L for one unit of asset i being denoted by $X_i$, and the amount invested in asset i is $v_i$. Then the total P&L is $\sum_i v_iX_i$, the Z% VaR is $VaR_z = \{Y \text{ s.t. } P[\sum_i v_iX_i \le Y] = Z\}$, and the expected shortfall is $ES_z = E[\sum_i v_iX_i \mid \sum_i v_iX_i \le VaR_z]$. The contributions (in levels) are:

$$ \text{volatility:}\quad MCL_i = v_i\,\mathrm{cov}\Big(X_i, \sum\nolimits_k v_kX_k\Big)\Big/\sqrt{\mathrm{variance}\Big(\sum\nolimits_k v_kX_k\Big)} $$

$$ \text{VaR:}\quad MCL_i = v_i\,E\Big[X_i \,\Big|\, \sum\nolimits_k v_kX_k = VaR_z\Big] $$

$$ \text{ES:}\quad MCL_i = v_i\,E\Big[X_i \,\Big|\, \sum\nolimits_k v_kX_k \le VaR_z\Big] $$

First, let us examine the formulae if the P&L distribution were normal. In this case, the contributions to volatility, VaR, and expected shortfall are all proportional. Using the formulae in Section 8.1, we see that

$$ MCL_i(\text{VaR, normal}) = v_i\,\mathrm{cov}\Big(X_i, \sum\nolimits_k v_kX_k\Big)\Big/\sqrt{\mathrm{variance}\Big(\sum\nolimits_k v_kX_k\Big)} \cdot \Phi^{-1}(z) $$

$$ MCL_i(\text{ES, normal}) = v_i\,\mathrm{cov}\Big(X_i, \sum\nolimits_k v_kX_k\Big)\Big/\sqrt{\mathrm{variance}\Big(\sum\nolimits_k v_kX_k\Big)} \cdot \phi\big(\Phi^{-1}(z)\big)/z $$

In other words, the marginal contributions to VaR and expected shortfall are proportional to the marginal contribution to volatility. McNeil, Frey, and Embrechts (2005, 260) show that the proportionality for volatility, VaR, and ES holds for any elliptical distribution (and any linearly homogeneous risk measure).

We turn next to using Monte Carlo or historical simulation to estimate risk measures (whether volatility, VaR, or expected shortfall) and contribution to risk. Use a superscript q to denote a particular scenario, and continue to use a subscript i to denote asset i. The formula for marginal contribution to volatility will be:

$$ \text{volatility:}\quad MCL_i = \frac{v_i \sum_q X_i^q\Big(\sum_k v_kX_k^q\Big)\Big/(n-1)}{\sqrt{\sum_q\Big(\sum_k v_kX_k^q\Big)^2\Big/(n-1)}} $$

That is, the covariance and variance will (naturally) be estimated by the usual sum of cross products and sum-of-squares. (For notational convenience, I assume in that formula that all the $X_i$ are measured as deviations from means.)

For VaR:20

$$ \text{VaR:}\quad MCL_i = v_i\Big[X_i^q \,\Big|\, \sum\nolimits_k v_kX_k^q = VaR_z\Big] $$

That is, we simply choose the appropriate scenario q that is the VaR (in other words, the scenario q that is the appropriate quantile, say the 50th out of 5,000 scenarios for the 1 percent/99 percent VaR).

20 Note that Marrison's (2002, 143–144) method for estimating contribution to VaR for Monte Carlo seems to apply to expected shortfall, not VaR, but see the following footnote.


The contribution to VaR for asset i is then simply the P&L for asset i from that scenario. (This will clearly be additive, since $\sum_k v_kX_k^q = VaR$.)

The problem with this is that the estimate for $MCL_i$ will have considerable sampling variability, since it uses only the single observation $v_i\big[X_i^q \mid \sum_k v_kX_k^q = VaR\big]$. There will be many possible combinations of the values $\{X_1, \ldots, X_n\}$ that all give the same $\sum_k v_kX_k^q = VaR$, and thus many possible realizations for $v_i\big[X_i^q \mid \sum_k v_kX_k^q = VaR\big]$.

To see the problem with using the P&L observation from a single scenario as the estimate of the contribution to VaR, consider the following simple portfolio:

- Two assets, $X_1$ and $X_2$.
- Each normally distributed with mean zero and volatility σ and correlation ρ.
- Each with weight v = 1/2.

The portfolio P&L will be the sum:

$$ P = 0.5X_1 + 0.5X_2 $$

with

$$ \sigma_p^2 = (1+\rho)\sigma^2/2 $$

We can write

$$ X_1 = bP + e $$

with

$$ \sigma_e^2 = \sigma^2(1-\rho)/2, \qquad b = 1 $$

We can thus write

$$ [X_1 \mid P = VaR = Y] = Z \sim N(bY, \sigma_e^2) $$

The contribution to VaR (in levels) for asset 1 is

$$ MCL_1 = v_1 E[X_1 \mid P = VaR = Y] = 0.5\,E[X_1 \mid P = VaR = Y] = 0.5Y $$


For Monte Carlo, the estimate of the contribution to VaR would be the random variable $v_1Z$:

$$ MCL_1(\text{Monte Carlo}) = v_1[X_1 \mid P = VaR = Y] = v_1Z, \qquad Z \sim N(bY, \sigma_e^2) $$

We can calculate

$$ P\big[[X_1 \mid P = Y] < 2bY\big] = P[Z < 2bY] = \Phi[bY/\sigma_e] $$

and

$$ P\big[[X_1 \mid P = Y] > 0\big] = P[Z > 0] = 1 - \Phi[-bY/\sigma_e] $$

These will be the probability that the estimated contribution in levels is below bY and above zero, respectively (remember Y will be negative, and that b = 1). For proportional marginal contribution, these two boundaries are proportional contribution above 1.0 or below 0.0 (the sign changes because Y is negative). In other words, for Monte Carlo there will be a 24.6 percent probability that the marginal contribution estimate is outside the range [0,1], when in fact we know the true contribution is 0.5. This is a huge sampling variability.

Heuristically, the problem is that for a particular Monte Carlo simulation we cannot average over multiple scenarios since there is only one scenario for which $\sum_i v_iX_i^q = VaR$. To average over multiple observations for a particular asset i (that is, to obtain multiple $v_i X_i^q$ for an asset i) we would need to carry out multiple complete simulations, say indexed by m, taking one observation $v_i X_i^{q_m}$ from each simulation m.21

For expected shortfall:

$$ \text{ES:}\quad MCL_i = v_i \cdot \sum\nolimits_q X_i^q \big/ m \quad \forall q \text{ s.t. } \sum\nolimits_i v_iX_i^q \le VaR_z $$

where now m = no. of q s.t. $\sum_i v_iX_i^q \le VaR_z$.
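A simulation sketch of these estimators for a made-up two-asset normal portfolio. The ES contribution averages each asset's P&L over all tail scenarios and is additive by construction; the single-scenario VaR contribution is shown alongside and is, as discussed above, much noisier:

```python
# Monte Carlo sketch of the contribution estimators above, for a made-up
# two-asset normal portfolio. Contribution to ES averages asset P&L over
# tail scenarios; the single-scenario VaR contribution is far noisier.
import numpy as np

rng = np.random.default_rng(0)
n_scen, z = 100_000, 0.01

sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
v = np.array([1.0, 1.0])

X = rng.multivariate_normal(np.zeros(2), sigma, size=n_scen)  # asset P&L per unit
port = X @ v                                                  # portfolio P&L by scenario

order = np.argsort(port)
i_var = order[int(z * n_scen)]                 # the VaR scenario
var_z = port[i_var]
tail = order[: int(z * n_scen)]                # scenarios below the VaR scenario

mcl_var = v * X[i_var]                 # contribution to VaR: one scenario (noisy)
mcl_es = v * X[tail].mean(axis=0)      # contribution to ES: tail average (stable)

print("VaR:", var_z, " contributions:", mcl_var, " sum:", mcl_var.sum())
print("ES :", port[tail].mean(), " contributions:", mcl_es, " sum:", mcl_es.sum())
```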

21 Noting the result quoted in McNeil, Frey, and Embrechts (2005, 260) showing the proportionality of contributions for volatility, VaR, and expected shortfall for elliptical distributions, one strategy might be to estimate the proportional contribution for volatility (or expected shortfall), then multiply by the VaR to obtain a contribution (in levels) to VaR. This would justify Marrison's (2002, 143–144) method but it is an ad hoc approach.


APPENDIX B: STEPWISE PROCEDURE FOR REPLICATING PORTFOLIO

In Section 10.5, I laid out a simple procedure to build up a replicating portfolio by sequentially adding hedges. A more complex procedure, more closely analogous to stepwise regression, is to go back to consider earlier best-hedge assets, one at a time, to ensure that they produce a greater reduction in portfolio variance than the newest asset. (A code sketch of the forward-selection core of this procedure follows the list below.)

- Choose the first replicating portfolio asset as the volatility-minimizing single best hedge.
  - That is, calculate σ*p(k) for all k. This is the best-hedge volatility for all one-asset best hedges or mirror portfolios.
  - Choose as the first replicating portfolio asset, 1, the asset k which produces the smallest σ*p(k).
- Choose the second replicating portfolio asset as that asset which, combined with the first, produces the largest reduction in portfolio variance.
  - That is, calculate σ*p(1 & k) for all k = {all assets excluding the first replicating portfolio asset}. This is the best-hedge volatility for all two-asset best hedges that include the first replicating portfolio asset.
  - Choose as the second replicating portfolio asset, 2, the k for which σ*p(1 & k) is the smallest (or the variance reduction σ*p²[1] – σ*p²[1 & k] is the largest).
- Choose as the third replicating portfolio asset that asset which, combined with the first two, produces the largest reduction in portfolio variance, but also check that the earlier assets still produce a sufficiently large reduction in variance.
  - That is, calculate σ*p(1 & 2 & k) for all k = {all assets excluding the first and second replicating portfolio assets}. This is the best-hedge volatility for all three-asset best hedges that include the first two replicating portfolio assets.
  - Choose as the third replicating portfolio asset, 3, the k for which σ*p(1 & 2 & k) is the smallest (or the variance reduction σ*p²[1 & 2] – σ*p²[1 & 2 & k] is the largest).
- Go back and check the first two replicating portfolio assets to make sure they produce a large reduction in variance when combined in a portfolio.
  - Calculate σ*p²(1 & 3) and σ*p²(2 & 3), the variances sequentially excluding one of the earlier chosen replicating portfolio assets.
  - Compare the variance for these new potential two-asset portfolios versus the already-chosen portfolio. That is, calculate σ*p²(1 & 3) – σ*p²(1 & 2) and σ*p²(2 & 3) – σ*p²(1 & 2).
  - If either or both are negative, replace either 1 or 2 with 3, choosing the most negative if both are negative. (In reality, σ*p²[1 & 3] > σ*p²[1 & 2] always, because 2 is chosen to minimize the portfolio variance when combined with 1.) Then go back and choose a new third asset.
- Choose as the fourth replicating portfolio asset that asset which, combined with the first three, produces the largest reduction in portfolio variance.
  - That is, calculate σ*p(1 & 2 & 3 & k) for all k = {all assets excluding the first, second, and third replicating portfolio assets}. This is the best-hedge volatility for all four-asset best hedges that include the first three replicating portfolio assets.
  - Choose as the fourth replicating portfolio asset, 4, the k for which the reduction from σ*p(1 & 2 & 3) to σ*p(1 & 2 & 3 & k) is the largest.
- Go back and check the first three assets to make sure they produce a large reduction in variance when combined in a portfolio.
  - Calculate σ*p²(1 & 2 & 4), σ*p²(1 & 3 & 4), and σ*p²(2 & 3 & 4), the variances sequentially excluding one of the earlier chosen replicating portfolio assets.
  - Calculate σ*p²(1 & 2 & 4) – σ*p²(1 & 2 & 3), σ*p²(1 & 3 & 4) – σ*p²(1 & 2 & 3), and σ*p²(2 & 3 & 4) – σ*p²(1 & 2 & 3).
  - If any are negative, replace the appropriate earlier asset with 4, choosing the most negative if more than one are negative. Then go back and choose a new fourth asset.
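The following sketch implements the forward-selection core of the procedure with made-up data (the back-checking steps are omitted for brevity), using the multi-asset generalization of the best-hedge variance in equation (10.8):

```python
# Greedy sketch of the stepwise procedure above (made-up data; the
# back-checking steps are omitted for brevity). Uses the multi-asset
# generalization of the best-hedge variance in equation (10.8).
import numpy as np

def best_hedge_var(sigma, v, idx):
    """Portfolio variance at the best hedge over the assets in idx."""
    sv = sigma @ v
    S = sigma[np.ix_(idx, idx)]
    return v @ sv - sv[idx] @ np.linalg.solve(S, sv[idx])

rng = np.random.default_rng(1)
n = 8
B = rng.normal(size=(n, n))
sigma = B @ B.T + n * np.eye(n)                  # made-up positive-definite covariance
v = rng.normal(size=n)                           # made-up positions
var_p = v @ sigma @ v

chosen = []
for _ in range(3):                               # build a three-asset replicating portfolio
    candidates = [k for k in range(n) if k not in chosen]
    k_best = min(candidates, key=lambda k: best_hedge_var(sigma, v, chosen + [k]))
    chosen.append(k_best)
    var_bh = best_hedge_var(sigma, v, chosen)
    pct_var = 1 - var_bh / var_p                 # fraction of variance explained
    pct_vol = 1 - np.sqrt(var_bh / var_p)        # fraction of volatility explained
    print(f"assets {chosen}: %var explained {pct_var:.1%}, %vol explained {pct_vol:.1%}")
```

This is the kind of calculation behind the "%Var/%Vol Explained" rows of the replicating-portfolio panels in Tables 10.19 and 10.24.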

APPENDIX C: PRINCIPAL COMPONENTS OVERVIEW

The factors obtained by principal components analysis are new random variables, which are linear combinations of the original variables:

$$ F = A'\cdot Y \qquad (A10.9) $$

where A' = matrix of linear transformation coefficients. This is n × n, where n is the number of original variables (columns of A are eigenvectors); F = column vector (1 to n) of factors; Y = column vector of original variables (for example, yields).


We want the variance-covariance matrix of the factors to be diagonal (so factors are uncorrelated):

$$ E[F\cdot F'] = E[A'\cdot Y\cdot Y'\cdot A] = A'\cdot E[Y\cdot Y']\cdot A = A'\cdot\Sigma_Y\cdot A \qquad (A10.10) $$

Principal components analysis sets the columns of the matrix A to the eigenvectors (characteristic vectors) of the variance-covariance matrix, with columns ordered by size of the eigenvalues. The eigenvectors are a convenient choice. They work because, by the definition of the eigenvectors of the matrix $\Sigma_Y$:22

$$ \Sigma_Y\cdot A = A\cdot\mathrm{Diag}(\lambda.) \qquad (A10.11) $$

where Diag(λ.) is the matrix with zeros off-diagonal and $\lambda_i$ in the diagonal element (i,i). This diagonalization gives a diagonal matrix for the variance-covariance of the variables F, E[F·F']:

$$ E[F\cdot F'] = A'\cdot\Sigma_Y\cdot A = A'\cdot A\cdot\mathrm{Diag}(\lambda.) = \mathrm{Diag}(s.)\cdot\mathrm{Diag}(\lambda.) = \mathrm{Diag}(s.\lambda.) \qquad (A10.12) $$

where $\Sigma_Y$ = variance-covariance matrix of the original variables; A = eigenvectors of $\Sigma_Y$; $\lambda_i$ = eigenvalues of $\Sigma_Y$ (ordered with largest first); $s_i$ = chosen (but arbitrary) normalization constant for the eigenvectors (so that $A'\cdot A = \mathrm{Diag}(s.)$).

The reverse transformation from factors to original variables is:

$$ Y = (A')^{-1}\cdot F \qquad (A10.13) $$

The matrix can easily be expressed in terms of the original A, using

$$ A'\cdot A = \mathrm{Diag}(s.) $$

to get

$$ (A')^{-1} = A\cdot\mathrm{Diag}(1/s.) $$

22 Assuming that the variance-covariance matrix is full rank.


giving

$$ Y = A\cdot\mathrm{Diag}(1/s.)\cdot F $$

or

$$ F = A'\cdot(A')^{-1}\cdot F = A'\cdot Y $$

One well-known result from principal components (eigenvectors) is that it provides a decomposition of the total variance, defined as $\mathrm{tr}(\Sigma_Y)$:

$$ \mathrm{tr}(\Sigma_Y) = \lambda_1 + \cdots + \lambda_n $$

That is, the eigenvalues sum to the total variance (sum of variances), and since the eigenvectors are orthogonal, the components are orthogonal components that explain the total variance. Traditionally, eigenvectors/eigenvalues are sorted from largest to smallest, so that the first eigenvector accounts for the largest proportion of the total variance, the second for the second-largest, and so on.

In looking at a portfolio, however, we are generally less concerned with the sum of the variances (diagonals of the variance-covariance matrix) and more concerned with the volatility or variance of the portfolio, which is a combination of the components of the variance-covariance matrix. It turns out that here the diagonalization in equations (A10.11) and (A10.12) is also valuable.

The portfolio variance is the quadratic form:

$$ D'\cdot\Sigma_Y\cdot D $$

Premultiplying D by $A^{-1}$ will allow us to diagonalize and decompose the portfolio variance into a sum of independent principal components:

$$ \begin{aligned} D'\cdot\Sigma_Y\cdot D &= D'\cdot A'^{-1}\cdot A'\cdot\Sigma_Y\cdot A\cdot A^{-1}\cdot D \\ &= D'\cdot A'^{-1}\cdot\Sigma_F\cdot A^{-1}\cdot D = D'\cdot A'^{-1}\cdot\mathrm{diag}(s.\lambda.)\cdot A^{-1}\cdot D \\ &= D'\cdot A\cdot\mathrm{diag}(1/s.)\cdot\mathrm{diag}(s.\lambda.)\cdot\mathrm{diag}(1/s.)\cdot A'\cdot D \\ &= (D'\cdot A)\cdot\mathrm{diag}(\lambda./s.)\cdot(A'\cdot D) \\ &= D'\cdot FL\cdot FL'\cdot D = \textstyle\sum d_i^2\lambda_i/s_i \end{aligned} \qquad (A10.14) $$

where $d_i$ = ith component of $D'\cdot A$.


This is a really valuable decomposition:

1. The term $A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i})$ in the third line is an n × n matrix, which we can call FL, the factor loading. Column i gives the change in the original yields due to a 1-σ move in principal component or factor i (the factor loading for factor i).
2. The term $D'\cdot A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i})$ (or $D'\cdot FL$) is a row vector. Element i provides the P&L due to a 1-σ move in principal component or factor i.
3. The full expression $(D'\cdot A)\cdot\mathrm{diag}(\lambda_i/s_i)\cdot(A'\cdot D)$ is the portfolio P&L variance. It is the dot product of the vectors $D'\cdot A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i})$ and $\mathrm{diag}(\sqrt{\lambda_i/s_i})\cdot(A'\cdot D)$, and so is the sum-of-squares of the P&L resulting from 1-σ moves in principal components. As a sum-of-squares it decomposes the overall portfolio variance into a sum of components due to separate, uncorrelated, principal components.

In other words, when we work with the principal components, we do have a simple additive decomposition of the overall variance into elements due to the separate principal components. This is in stark contrast to working with the original variables, where the overall variance does not have any additive decomposition—the best we can do is an additive decomposition of the infinitesimal change in the volatility, the marginal contribution discussed earlier.

As mentioned before, the eigenvectors are determined only up to an arbitrary constant (the $s_i$). There are three convenient choices:

1. $s_i$ = 1. This is the standard normalization (used, for example, by MatLab and Gauss). This gives $E[F\cdot F'] = \mathrm{diag}(\lambda_i)$ and $(A')^{-1} = A$, $A' = A^{-1}$.
2. $s_i$ = $1/\lambda_i$. This gives $E[F\cdot F'] = I$.
3. $s_i$ = $\lambda_i$. This means a change in Y due to a 1-σ move in $F_i$ (the factor loading) is given by the matrix A, so this can be read directly from the columns of A.

Example

Consider 2-year and 10-year rates. Assume that the rates have the following volatilities:

          ln vol     rate    bp vol    bp vol daily
2-yr        20%       5%       100        6.262
10-yr       15%       5%        75        4.697


If the correlation between 2-year and 10-year rates is 80 percent, then the variance-covariance matrix will be (measured in daily basis points):

$$ \Sigma_Y = \begin{bmatrix} 39.22 & 23.53 \\ 23.53 & 22.06 \end{bmatrix} $$

The eigenvectors are:

$$ A = \begin{bmatrix} 0.8193 & -0.5734 \\ 0.5734 & 0.8193 \end{bmatrix} $$

The eigenvalues are 55.69 and 5.595. (The eigenvectors are calculated using the normalization A'A = I, or $s_i$ = 1.) The change in $Y_i$ due to a 1-σ change in the new factors, or the factor loading FL, is given by the columns of $A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i})$, in this case $A\cdot\mathrm{diag}(\sqrt{\lambda_i})$. These are the columns in the following matrix:

$$ \begin{bmatrix} 6.11 & -1.36 \\ 4.28 & 1.94 \end{bmatrix} $$

This is roughly the parallel (up 6.11 and 4.28 for 2-year and 10-year) and twist (–1.36 and +1.94 for 2-year and 10-year) factors commonly found as the first two factors for yield curve movements.

The sum of rate variances is 61.27 (the sum of the diagonals of the variance-covariance matrix, 39.22 + 22.06) and the first component accounts for 90.9 percent of this. This is the standard result for principal components analysis.

More interesting for the current context is to examine the principal components analysis applied to a portfolio. For this, we need the portfolio sensitivity D so that we can calculate terms such as $D'\cdot A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i})$. Now assume that the portfolio sensitivity was

$$ D = \begin{bmatrix} -1 \\ 2 \end{bmatrix} $$

That is, when 2-year rates increase by 1 bp, the P&L is –1, while when 10-year rates increase by 1 bp, the P&L is +2. This can be translated into sensitivities to the principal components using:

$$ \text{P\&L due to a 1-sigma move in components} = D'\cdot(A')^{-1}\cdot\mathrm{diag}(\sqrt{s_i\lambda_i}) = D'\cdot A\cdot\mathrm{diag}(\sqrt{\lambda_i/s_i}) $$


$$ \text{w.r.t. 2yr/10yr:}\quad D = \begin{bmatrix} -1 \\ 2 \end{bmatrix} \qquad\qquad \text{w.r.t. components:}\quad \begin{bmatrix} 2.44 \\ 5.23 \end{bmatrix} $$

The overall portfolio variance is 33.32 and decomposes into 33.32 = 5.97 + 27.35 (= 2.44² + 5.23²). In contrast to the sum of yield variances (diagonals of the yield variance-covariance matrix), where the first factor is most important, only 5.97 (out of 33.32) or 17.9 percent of the portfolio variance is accounted for by the first factor while the second factor accounts for 82.1 percent. This can be seen directly from the P&L due to 1-sigma moves. Since the factors are orthogonal, the portfolio variance is the sum of the component variances (covariance terms are zero):

$$ \text{Total Variance} = 33.32 = 2.44^2 + 5.23^2 $$
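The example is easy to reproduce; a sketch using the $s_i$ = 1 normalization (eigenvector signs are arbitrary, so they may differ from the printed A):

```python
# Reproducing the 2-year/10-year example with the s_i = 1 normalization
# (eigenvector signs are arbitrary and may differ from the printed A).
import numpy as np

vols = np.array([100.0, 75.0]) / np.sqrt(255)    # daily bp vols: 6.262, 4.697
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
sigma_y = corr * np.outer(vols, vols)            # [[39.22, 23.53], [23.53, 22.06]]

lam, A = np.linalg.eigh(sigma_y)                 # eigen-decomposition, A'A = I
order = np.argsort(lam)[::-1]                    # sort largest first
lam, A = lam[order], A[:, order]                 # 55.69, 5.595

FL = A @ np.diag(np.sqrt(lam))                   # factor loadings: [[6.11, -1.36], [4.28, 1.94]]
D = np.array([-1.0, 2.0])                        # P&L per 1 bp move in 2yr, 10yr

pnl_1sig = D @ FL                                # P&L per 1-sigma factor move: [2.44, 5.23]
print("factor loadings:\n", FL)
print("P&L per 1-sigma factor move:", pnl_1sig)
print("portfolio variance:", D @ sigma_y @ D,    # 33.3 ...
      "= sum of squares:", (pnl_1sig ** 2).sum())
```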


CHAPTER 11

Credit Risk

11.1 INTRODUCTION

Credit risk is ubiquitous in modern finance: "Credit risk is the risk that the value of a portfolio changes due to unexpected changes in the credit quality of issuers or trading partners. This subsumes both losses due to defaults and losses caused by changes in credit quality" (McNeil, Frey, and Embrechts 2005, 327).

In many ways, the analysis of credit risk is no different from risk arising in any other part of a firm's business. The focus is on the distribution of gains and losses (P&L) and how information about gains and losses can be used to manage the business of the firm.1

Although the underlying idea is simple, particular characteristics of credit risk mean that the techniques used to estimate and analyze credit risk are often different and more complex than for market risk. The distribution of P&L can be very difficult to estimate for a variety of reasons (see McNeil, Frey, and Embrechts, 2005, 329):

- Most credit risks are not traded and market prices are not available, so the distribution of gains and losses must be constructed from first principles, requiring complex models.
- Public information on the quality and prospects for credit risks is often scarce. This lack of data makes statistical analysis and calibration of models problematic. (Additionally, the informational asymmetries may put the buyer of a credit product at a disadvantage relative to the originator.)
- The P&L distribution for credit risks is often skewed, with fat lower tails and a relatively larger probability of large losses. Such skewness is difficult to measure but particularly important because the economic capital required to support a portfolio is sensitive to exactly the probability of large losses—the shape of the lower tail drives the economic capital.
- Dependence across risks in a portfolio drives the skewness of the credit risk distribution, but dependence is difficult to measure with accuracy.

1 An example of using the P&L distribution in managing a business would be the CFO of a bank setting the following (cf. Marrison 2002, 229):

- Provisions—expected losses over a period—the mean of the distribution.
- Reserves—loss level for an unusually bad year—may be set at the 5 percent quantile (VaR) of the loss distribution.
- Capital (also termed economic capital to distinguish it from regulatory capital)—loss level for an extraordinarily bad year, required to ensure a low probability of default—may be set at the 0.1 percent or 0.03 percent quantile (VaR) of the loss distribution.

Credit risk modeling has the same underlying goal as market risk analysis—build the distribution of P&L over some horizon and use that distribution to help manage the business activity. For market risk, the distribution of P&L can usually be measured directly from the market, by looking at history. For credit risk, in contrast, the distribution must often be built from scratch, using limited data and complicated models, each with their own specialized methodology and terminology.
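As a toy illustration of using the distribution in the sense of footnote 1, the sketch below simulates a crude loss distribution for a loan portfolio (independent defaults with made-up parameters; the rest of this chapter argues that dependence across defaults is exactly what such a simplification misses) and reads provisions, reserves, and economic capital off it as quantiles:

```python
# Toy illustration of footnote 1: read provisions, reserves, and economic
# capital off a simulated loss distribution. Independent defaults with
# made-up parameters; dependence across defaults, which really shapes the
# tail, is ignored here.
import numpy as np

rng = np.random.default_rng(0)
n_loans, pd_, lgd, exposure = 1_000, 0.02, 0.5, 1.0
n_sims = 100_000

defaults = rng.binomial(n_loans, pd_, size=n_sims)   # defaults per scenario
losses = defaults * lgd * exposure                   # portfolio loss per scenario

provisions = losses.mean()                           # expected loss
reserves = np.quantile(losses, 0.95)                 # unusually bad year (5% tail)
capital = np.quantile(losses, 0.999)                 # extraordinarily bad year (0.1% tail)

print(f"provisions {provisions:.1f}, reserves {reserves:.1f}, capital {capital:.1f}")
```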

Varieties of Credit Risk

Credit risk shows up in many areas and pervades modern finance. This section provides a brief overview of the main instruments and activities where credit risk shows up.

The standard approach to credit risk traces back to commercial banks and their portfolios of loans (cf. Jorion 2007, 454–455). It is easy to see that a major risk, indeed the dominant risk for a loan, is that the issuer will default: credit risk in its quintessential form. Although credit risk analysis and modeling may have originated in banks to cover loans, credit exposure actually permeates finance:

- Single-issuer credit risk such as for loans and bonds. The default of the issuer means nonrepayment of the principal and promised interest on the loan or bond.
- Multiple-issuer credit risk such as for securitized mortgage bonds. Such bonds are issued by a bank or investment bank but the underlying assets are a collection of loans or other obligations for a large number of individuals or companies. Default of one or more of the underlying loans creates credit losses.
- Counterparty risk resulting from contracts between parties, often over-the-counter (OTC) derivatives contracts. OTC transactions, such as interest rate swaps, are contracts between two parties, and if one party defaults, it may substantially affect the payoff to the other party. Other contracts, such as letters of credit, insurance, and financial guarantees, also entail counterparty credit risk because there is potential for loss upon default of one party.
- Settlement risk. Associated with delivery and settlement of trades, the possibility that one side fails to settle a trade after being paid.

Data Considerations

Credit risk is as much about data as it is about quantitative tools and analysis. One of the biggest challenges in the practical implementation of a credit risk system is the basic task of developing an effective database of both external and internal data.

The data required to analyze credit risk fall into two broad categories. First, what might be termed external data cover the credit quality and prospects of counterparties and other credit exposures. As mentioned earlier, public information on credit quality, indeed on all aspects of a counterparty, is often difficult to acquire, and this makes statistical analysis and credit modeling difficult.

The second category is what might be termed internal data, internal to the firm: the details concerning exactly who are a firm's counterparties and other credit exposures. Collecting, collating, cleaning, and using these internal data is often challenging. Such internal data are under the control of the firm, and so it is often assumed that they are accessible. Unfortunately, such data are often scattered throughout different units of an organization, in separate legacy systems, collected and stored for reasons unrelated to credit risk analysis, and all too often difficult to access and unusable in the original form. These internal data can be intrinsically complex and difficult to collect.

As an example of the potential complexity of internal data, consider a firm's possible exposure before Lehman Brothers' collapse in 2008. One unit might hold a Lehman bond, another might hold an OTC interest rate swap with Lehman, and a third might be settling an FX trade through Lehman as prime broker. All of these are at risk when Lehman goes into bankruptcy. Simply collecting information on the existence of such disparate exposures is not trivial, particularly given their heterogeneity in terms of duration, liquidity, and complexity of underlying assets.


11.2 CREDIT RISK VERSUS MARKET RISK

Earlier chapters focused primarily on market risk, so it is useful to highlight some differences between credit risk and market risk. The differences center on a few specific issues: first, the time frame over which we measure the P&L distribution, which is much longer for credit risk; second, the asymmetry and skew of the P&L distribution—credit risk leads to highly skewed distributions; third, the modeling approach—P&L for credit risks must usually be modeled from first principles rather than from observed market risk factors; finally, data and legal issues, which become relatively more important. We review each of these briefly before turning to a simplified model of credit risk.

Liquidity and Time Frame for Credit versus Market Risk

Although the P&L distribution is the primary focus for both market risk and credit risk, the time frame over which P&L is evaluated is often substantially longer for credit risk than for market risk. This is primarily a result of the illiquidity of most credit products. Credit products, loans being the classic example, have traditionally not been traded, and institutions have held them until maturity. Furthermore, credit events tend to unfold over a longer time horizon than market events. Information on credit status changes over weeks and months, not minutes and hours as for market variables. For these reasons, measuring P&L for credit risk over a period of days or weeks is usually inappropriate because there is no practical possibility that the P&L could be realized over such a short period, and in many cases it could not even be realistically measured over such a period.

One result of considering a much longer time period for the P&L distribution is that the mean matters for credit risk while it generally does not for market risk. For market risk, where distributions are measured over days, the volatility of market returns swamps the mean. For credit risk, in contrast, the P&L distribution is often measured over one or more years, and over such a long period, the mean will be of the same order as the volatility and must be accounted for in any summary measure, whether VaR or another.
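A rough scaling makes the point concrete (the 5 percent annual mean and 15 percent annual volatility here are illustrative assumptions, not figures from any particular portfolio):

    Daily:  mean ≈ 5%/252 ≈ 0.02%;  volatility ≈ 15%/√252 ≈ 0.95%  (volatility roughly 50 × mean)
    Annual: mean = 5%;  volatility = 15%  (volatility only 3 × mean)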

Asymmetry of Credit Risk

The distribution of P&L for credit risks will often be asymmetric, highly skewed with a fat lower tail. Figure 11.1 shows results for a stylized model of the returns from a simple loan portfolio (discussed in more detail further on).


It is often argued that the distribution of credit risk is asymmetric because a credit portfolio will have many small gains and a few large losses, for example, due to infrequent defaults on loans causing a complete loss of principal. In fact, there are more fundamental reasons for asymmetry in credit risk. The idea of small gains versus large losses certainly applies, but cannot be the whole story, as we will see in Section 11.3. Dependence across defaults, for example, defaults clustering during times of general economic stress, is a prime candidate for why defaults, and credit risks generally, exhibit asymmetry.

Whatever the cause of asymmetry or skewness, it is more prevalent in credit risk than market risk. This makes credit risk inherently more difficult to measure than market risk. The degree of skewness will have a particularly large impact on the lower tail, and since the degree of skewness is hard to determine exactly, the lower tail will be difficult to measure. Since it is exactly the lower tail that is of most interest in credit risk, for example, in determining the reserves or economic capital required to sustain a business, asymmetry and skewness make the assessment of credit risk more complex than market risk.

[Figure 11.1 here: a histogram of probability against income ($ thousands).]

FIGURE 11.1 P&L Distribution for a Simple Model of a Loan Portfolio
Note: This is the one-year income (in dollars) from holding a portfolio of 1,000 homogeneous loans of face value $1,000, each with average probability of default of 0.01 and a default correlation across loans of 0.4 percent (roughly representative of BB-rated loans). Loss given default is 50 percent; promised interest income is $65. The model is discussed in Section 11.3, together with the specifics of the dependence structure. Reproduced from Figure 5.15 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.



Constructivist (Actuarial) versus Market Approach to Modeling the P&L Distribution

Market risks are, by their very nature, actively traded in the market. The availability of market prices means that estimates of the distribution of P&L over a given time horizon (usually short) can usually be derived from observed market risk factors: market prices, yields, and rates. Although considerable thought and effort are directed toward estimating the appropriate distribution (for example, ensuring that the tails are appropriately measured), the distribution is invariably based on observed market prices, and market risk is solidly grounded in market pricing.

In contrast, credit risks are often not actively traded, so market prices are not available and the distribution of P&L cannot be taken from observed prices. As a result, the P&L from credit-related products must be constructed from a granular model of the fundamental or underlying causes of credit gains and losses, such as default, ratings changes, and so on. I call this a constructivist approach to modeling the distribution of P&L.2

The contrast between the market-based approach used for market risk and the constructivist approach applied to credit risk is a primary distinguishing characteristic of market risk versus credit risk. Much of the complication surrounding credit risk is a result of the necessity of building the distribution from first principles, constructing the distribution from underlying drivers.

While the approaches taken to modeling market and credit risks are different, this arises not from a fundamental difference between market and credit risk but rather from the type of information available. The distribution of P&L for IBM, for example, could be constructed from a fundamental analysis, considering IBM's market position, pipeline of new products, financial position, and so on. In fact, this is what equity analysts do to make stock recommendations. In risk measurement, there are many reasons for using market prices for IBM rather than constructing the P&L based on underlying variables. Probably the best reason is that the market price distribution incorporates the estimates of a multitude of investors and traders regarding IBM's future prospects—there needs to be a pretty strong reason to ignore market prices.

2 I would call this a structural approach except that McNeil, Frey, and Embrechts (2005) have used that term to highlight a useful distinction between types of credit risk models, as can be seen in Section 11.4.



In a perfect world in which the distribution of future outcomes can be appropriately modeled, the constructivist approach will give the same answer as the market or price approach. In practice, credit modeling must often take the constructivist approach because the underlying risk is not traded and market prices are not available.

Data and Legal Issues

Data issues were touched on earlier—credit risk involves substantial demands for both external and internal data. The data for credit risk are often low frequency (monthly, quarterly, annual) versus the high-frequency data common for market risk, but collecting and collating data is often difficult because public data on credit risks are often not available.

By legal issues, I mean matters such as the legal organization of counterparties, details of contracts (netting, collateral), or priority and venue in the event of bankruptcy. Such issues generally do not matter for market risk. Market risks usually depend on changes in prices of standardized securities rather than arcane details of legal contracts. In contrast, legal matters are paramount when considering credit risk: exactly what constitutes a default, and how much is recovered upon default, critically depends on legal details.

11.3 STYLIZED CREDIT RISK MODEL

Introduction

My treatment of credit risk and credit modeling diverges from the approach usually taken in risk management texts. The current section lays out a stylized model to provide a framework for understanding how credit risk models are used. Section 11.4 provides a taxonomy of models (largely following McNeil, Frey, and Embrechts 2005, ch. 8 and ch. 9). Section 11.5 then briefly discusses specific models (Merton's [1974], KMV, CreditMetrics, CreditRisk+) and puts them into context, using the stylized model of the current section.

Most treatments, in contrast, start with a discussion of what credit risk is, industry practice for analyzing and categorizing credit risk, and a detailed description of one or more specific credit models, such as Merton's (1974) option-theoretic model of default or industry-developed models such as KMV or CreditMetrics. I will refer to other texts for background on actual credit risk practice and models. Crouhy, Mark, and Galai (2000, ch. 7 through 12) is a particularly good review of banking industry practice and models. (Crouhy, Mark, and Galai 2006, ch. 9 through 12 provides a somewhat less detailed overview.) McNeil, Frey, and Embrechts (2005, ch. 9 and particularly ch. 8) provides a good treatment of the technical foundations for credit models. Marrison (2002, ch. 16 to 23) also has an extensive discussion of industry practice and modeling, with chapter 17 providing a particularly nice overview of the variety of credit structures that a bank faces. Duffie and Singleton (2003) is a more advanced and technical reference.



The aim of the present section is to demonstrate the characteristics of credit risk modeling, not to build a realistic credit model. One important aim of this section will be to point out that the concept behind credit risk models is simple, but also to explain why realistic models are complex and difficult to build.

Stylized Credit Risk Model

The stylized model is built to analyze a particular portfolio, one that contains 1,000 identical loans. The time horizon over which we measure the P&L distribution is one year, as we wish to determine an appropriate level of annual reserves; one year also happens to be the loan maturity. The loans are made to a variety of businesses, but all the businesses have the same credit quality, so the chance of default or other adverse event is the same for each loan; the chance of default for a single loan is 1 percent. All the businesses are assumed to be independent of one another, so defaults are independent. If a loan defaults, there is virtual certainty that recovery, from liquidation of the business or assets held as collateral, will be 50 percent of the loan's face value.

These characteristics are summarized in Table 11.1.

TABLE 11.1 Characteristics of Loans, Credit Analysis, and Credit Quality

Loans:
• $1,000 initial investment
• One-year final maturity
• Promised interest at year-end: 6.5 percent

Credit Quality:
• All identical credit quality
• Probability of default of an individual loan: 0.01
• Individual loans independent
• Recovery upon default: 50 percent

Output:
• Require one-year P&L distribution

Reproduced from Exhibit 5.4 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


The initial portfolio value is $1 million. The value in one year depends on the repayment and default experience. If an individual loan is in good standing, the repayment is $1,065 (income $65). If a loan defaults, the recovery is $500 and the loss is $500. These payments are shown schematically in Figure 11.2.

The actual income (less initial investment) is:

Actual Income = (nondefaults × $1,065) + (defaults × $500) − (1,000 × $1,000)

We know that the average probability of default is 1 percent, so, on average, 10 loans will default. Thus the average actual income will be:

Average Actual Income = (990 × $1,065) + (10 × $500) − (1,000 × $1,000) = $59,350
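This bookkeeping is easy to verify in code. The following is a minimal Python sketch (the function name and layout are mine, for illustration; the parameters are those of the stylized model):

    # Income, net of the initial investment, for the stylized portfolio,
    # as a function of the number of defaults.
    def portfolio_income(defaults, n_loans=1000, face=1000.0,
                         interest=0.065, recovery=0.50):
        repaid = (n_loans - defaults) * face * (1 + interest)  # performing loans pay $1,065
        recovered = defaults * face * recovery                 # defaulted loans return $500
        return repaid + recovered - n_loans * face             # less the $1,000,000 invested

    print(portfolio_income(10))  # the average case of 10 defaults: 59350.0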

Beyond the average performance, we need to know how the portfolio is likely to behave in adverse circumstances, and how much the bank making the loans should set aside in reserves to cover the contingency that more loans than expected go into default. We can answer such a question if we know the full distribution of the P&L.

Before turning to the solution of this model, let me highlight a critical assumption: the independence of loans across borrowers. Loans are assumed to be independent, with no correlation across borrowers (no change in the probability of default because other borrowers do or do not go into default). Furthermore, the probability of default does not change with conditions in the economy or other factors—the probability is constant at 0.01 for every borrower.

[Figure 11.2 here: schematic of payments.
Panel A, Individual Loans: $1,000 loan amount; repayment with no default: $1,000 + $65; with default: recovery of $500.
Panel B, Portfolio: $1,000,000 portfolio investment; final value: no. nondefaults × ($1,000 + $65) + no. defaults × $500.]

FIGURE 11.2 Schematic of Initial Investment and Final Repayment of Individual Loans and Overall Portfolio
Reproduced from Figure 5.16 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


Under this assumption, the distribution of defaults is actually very simple: a binomial distribution, since the outcome for each of the 1,000 loans is a Bernoulli trial, default (probability 0.01) versus not-default (probability 0.99). The probability of having k defaults out of 1,000 firms is (by the binomial distribution):

P[k defaults] = C(1,000, k) × 0.01^k × 0.99^(1,000−k)

where C(1,000, k) is the binomial coefficient, "1,000 choose k."

Figure 11.3 shows the distribution of defaults in Panel A and the distribution of income in Panel B.

A couple of points with respect to the distribution of defaults. First, the distribution for this situation is easy to write down analytically, but that is not generally the case; simulation is usually necessary. Simulation would be easy in this case: simply draw 1,000 uniform random variables (rv) between 0 and 1 and compare each with the probability of default (0.01). If the rv is above 0.01, the firm does not default; if below 0.01, the firm does default. Simulation in more complex cases is similar, and often very simple conceptually.
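A sketch of that simulation in Python, with the analytic binomial distribution as a check (numpy and scipy are my choice of tools here; the seed and trial count are arbitrary):

    import numpy as np
    from scipy.stats import binom

    rng = np.random.default_rng(seed=42)
    n_loans, p_default, n_trials = 1000, 0.01, 10_000

    # One uniform draw per loan: the loan defaults when its draw falls below 0.01.
    u = rng.uniform(size=(n_trials, n_loans))
    n_defaults = (u < p_default).sum(axis=1)

    print(n_defaults.mean())                   # close to the expected 10 defaults
    print(binom.pmf(10, n_loans, p_default))   # analytic probability of exactly 10 defaults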

The second point to note is that the distribution of losses and income is symmetric. This is hardly surprising given the well-known result that the binomial distribution converges to the normal for large n, and n = 1,000 is large. It does, however, demonstrate that distributions from credit risk are not of necessity asymmetric and that asymmetry does not necessarily arise from "small gains, large losses." The portfolio has many small gains ($65 for each of roughly 990 performing loans) and a few large losses ($500 for each of roughly 10 nonperforming loans), but the distribution is still symmetric; it is not simply small gains and large losses that produce an asymmetric distribution. Credit loss distributions are indeed often asymmetric, but usually due to dependence in defaults across firms. We return to this and consider the asymmetry of the distribution and alternative dependence assumptions further on.

Using the distribution displayed in Figure 11.3, we could provide some reasonable answers to questions regarding how much we might lose in adverse circumstances, but these questions will be delayed for a short time.

Credit Risk Modeling—Simple Concept, Complex Execution

This model is very simple, but it does contain many, even most, characteristics of more realistic credit risk models. (More accurately, static or discrete-time models as discussed in the taxonomy in Section 11.4.)


[Figure 11.3 here: Panel A, a histogram of probability against number of defaults; Panel B, a histogram of probability against income ($ thousands).]

FIGURE 11.3 Number of Defaults for Portfolio of 1,000 Homogeneous Loans
Note: Panel A: The number of defaults for a portfolio of 1,000 homogeneous loans, each with probability of default of 0.01. This is a binomial distribution with 1,000 trials, probability of default 0.01. Panel B: The one-year income from holding such a portfolio with loss given default of 50 percent and promised interest of 6.5 percent. Reproduced from Figure 5.17 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


There are four risk factors that contribute to credit risk, and this model highlights three (correlation is discussed further on):

1. Default—Probability that the counterparty defaults and some or all of the value is lost. Termed probability of default (PD) or expected default frequency (EDF).3 (In this example, the probability of default is 0.01.)

2. Correlation—Dependence across firms in default probability. As discussed further on, this has a huge impact on the shape of the distribution of credit losses. (In this example, the correlation is zero.)

3. Exposure—The amount the firm has exposed to a particular counterparty or has at risk to a particular credit, also termed the exposure at default (EAD). (In this example, the exposure for each loan is $1,000.)

4. Recovery—The amount recovered upon default, since rarely is the whole amount lost. Also expressed as the loss given default (LGD), where recovery = 1 − LGD. (In this example, recovery is 50 percent, or $500 out of the $1,000 investment.)

Technically, credit risk depends on three factors: default, recovery, and exposure (see, for example, Jorion 2007, 454–455). The loss amount is the product of these:

Dollar Loss = L = e × (1 − d) × Y

where
Y = default indicator: 1 if default occurs, 0 if no default
d = percentage recovery (in the current model, 50 percent)
e = exposure, the dollar amount at risk if default occurs (in the current model, the loan amount, $1,000)
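For the stylized portfolio, the expected principal loss per loan is then just the product of the three terms (a one-line check; the variable names are mine):

    # Expected principal loss per loan: exposure x (1 - recovery) x default probability.
    exposure, recovery, p_default = 1000.0, 0.50, 0.01
    print(exposure * (1 - recovery) * p_default)  # $5 per loan, or $5,000 across 1,000 loans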

Correlation or dependence across defaults is simply a characteristic of the joint default probability and is subsumed under the default factor. Nonetheless, dependence across defaults is such an important element, one that has such a huge impact on the shape of the distribution of defaults in a portfolio context, that I include it as a risk factor in its own right. It is particularly important to highlight it alongside default because the primary focus when estimating default probability is often on a firm in isolation (the marginal probability) rather than the dependence across firms (the joint probability).

3 More generally, this would be extended to include more general transitions between credit states, with the transition from solvent to default being a simple special case. This is discussed more in the section titled "Credit Migration and CreditMetrics."


The current stylized model highlights the simple concept behind credit risk models (more precisely, static or discrete-time models). This particular model makes simple assumptions about defaults (the probability of default is the same for all loans and there is no correlation across loans), recovery (it is fixed at 50 percent), correlation (there is none), and exposure (the fixed loan amount at maturity; it does not vary with, say, interest rates or FX rates). This model would be perfectly suitable and realistic if these underlying assumptions were realistic. The problem is that the assumptions about the underlying defaults, recoveries, and exposures are not realistic for a real-life portfolio. These assumptions are what make the model unrealistic, not the model itself.

This model also highlights why real-world credit risk modeling is so difficult. The difficulty arises not so much from solving the credit model once risk factors are known, as from estimating the risk factors underlying the credit process itself. Each of the four factors must be parameterized and estimated:

1. Default probability (or, in a more general model, transitions between states) must be estimated for each and every firm. Almost by definition, default is not observed historically for a solvent firm, so one cannot naively use history to estimate the probability of default; it must be built for each firm, appropriately accounting for each firm's particular circumstances, circumstances that will include elements such as current credit status, current debt level versus assets, future prospects for its region and business, and so on.

2. The correlation or dependence structure across firms must be estimated. Again, because default is rarely observed, one cannot naively use history to observe default correlation directly. Estimates of correlation must be built indirectly using models that make reasonable projections, accounting for firms' particular circumstances. Since correlation has a huge impact on the shape of the loss distribution, it is critically important, but such dependence is difficult to measure and estimate.

3. The exposure upon default must be calculated. This can sometimes, but not always, be done using market pricing models. It can be particularly difficult for derivative products such as interest rate swaps, with which the exposure varies with market variables such as interest rates.4

4. The recovery upon default, or loss given default, must be projected. Recovery clearly has a big effect, as it can vary from 100 percent recovery, which implies that default has little monetary impact, to zero percent recovery. Recovery is very difficult to project, and actual recovery rates can be quite far from expectations.5

4 Marrison (2002, ch. 17) has a nice discussion of exposures from a variety of products that a bank might deal in.



Estimating these is a major analytical, data collection, and data analysis project. With realistic assumptions for defaults, dependence, recovery, and so on, the stylized model discussed here would be very realistic. The difficulty is that arriving at such realistic assumptions is a complex undertaking.

Note that, just as for market risk, we can conceptually split the modeling into two components. The first is external: default and recovery. The second is internal: monetary exposure to the default. Jorion (2006, 247) puts the situation as follows when discussing market risk: "The potential for losses results from exposures to the [market] risk factors, as well as the distribution of these risk factors." In the present case, the analogue of market risk factors is defaults and recoveries, or the drivers underlying them, while the monetary exposure at default is the exposure to those risk factors.

Models discussed further on, such as KMV, CreditMetrics, and CreditRisk+, produce more realistic results for the probability of default of individual loans, the dependence structure (correlation across loans), and so on. In fact, a useful way to view such models is as methods for deriving realistic default probabilities, dependence structures, and so on, in the face of limited current and past information on counterparties and their credit status. Solving for the loss distribution, once the default probabilities and so on are known, is not conceptually difficult (although it will often be computationally demanding).

The fundamental problem in credit risk, highlighting the contrast with market risk, is that good history on defaults and recoveries is rudimentary, knowledge of current status is incomplete, and projections of future states (default probabilities, and so on) are very difficult. One must turn to modeling the underlying economic and financial drivers to try to derive realistic estimates. In practical applications, one must spend much time and effort both on making the assumptions reflect reality and on building and solving the model. The actual implementation of a credit model is very demanding, requiring substantial resources devoted to analytics, data, and programming:

• Analytics: Working through the analytics of building and solving the model.

5 Following Lehman's 2008 default, recovery on CDS contracts covering Lehman bonds was about 10 cents on the dollar. Before the settlement of those contracts, it was usually assumed that recovery would be on the order of 40 percent, not 10 percent.


• Data: Collecting and analyzing data to measure how risks differ across loans and counterparties, categorizing according to different default probabilities, measuring and modeling the dependence structure, quantifying exposures, and estimating recoveries. Data can be split into external and internal data:
  • External data
    • Default probability of individual issuers—requires collecting large amounts of data on each individual risk to estimate the likely default probability. Much of what KMV and CreditMetrics do is this, even before getting to the modeling.
    • Dependence or correlation across defaults—critical to get right but intrinsically difficult because there are not much data (default is a rare event) and because dependence is certainly nonstationary, changing with the state of the economy and over time in ways difficult to measure (again, because of data limitations).
    • Recovery—this depends on many uncertainties, but even just getting the legal priorities right (in the event of default) is data-intensive.
  • Internal data
    • What are the exposures? This is not always a trivial exercise, partly because the data are often dispersed around the firm, and partly because credit exposure will include not just homogeneous loans, but heterogeneous exposures such as loans, bonds, counterparty exposure on swaps, and settlement exposures, all across disparate units within the organization.

• Programming: Solution of credit models usually requires large-scale simulation, since analytic methods are not feasible. Getting this to work means substantial programming (separate from the systems work to manage data).

The stylized model I have introduced here provides a framework for understanding how credit models work, and a foil for illustrating how and why the concepts are simple while realistic implementations are complex. Later sections will survey some of these more realistic models, but for now we return to this stylized model.

VaR and Economic Capital

We now return to using the distribution shown in Figure 11.3 to answer how much a firm might lose in adverse circumstances. Table 11.2 shows the cumulative probability, defaults, and income for part of the lower tail (the distribution function rather than the density function displayed in Figure 11.3). The average income is $59,350. We can see from Table 11.2 that the 1%/99% VaR is a loss (compared to average) between $4,520 and $5,085, while the 0.1%/99.9% VaR is a loss between $6,215 and $6,780.6



Marrison (2002, 229) has a succinct description of what the CFO of a bank might require from a credit risk modeling exercise such as we have conducted:

• Provisions—Amounts set aside to cover the expected losses over a period—this would be the expected losses.

• Reserves—Amount set aside to cover losses for an unusually bad year—may be set at the 5 percent quantile (5%/95% VaR) of the distribution.

6 Note that the probability of actually losing money outright from this portfolio is low (if the assumptions about the underlying loans were valid). It might be reasonable to measure income relative to costs, where costs might be the original loan plus some cost of funds. If the cost of funds were 5 percent (versus promised interest of 6.5 percent), then average actual income less costs would be $9,350.

TABLE 11.2 Statistics for Income Distribution for Portfolio of 1,000 Homogeneous Loans

Mean and Standard Deviation
Mean                  $59,350
Standard Deviation    $ 1,778

Lower Tail of Distribution
Cumulative Probability   Defaults   Income    Income versus Avg.
0.08246                  15         $56,525   −$2,825
0.04779                  16         $55,960   −$3,390
0.02633                  17         $55,395   −$3,955
0.01378                  18         $54,830   −$4,520
0.00685                  19         $54,265   −$5,085
0.00321                  20         $53,700   −$5,650
0.00146                  21         $53,135   −$6,215
0.00066                  22         $52,570   −$6,780
0.00026                  23         $52,005   −$7,345

Note: This is the one-year income and associated cumulative probability (distribution function rather than the density function displayed in Figure 11.3) from holding a portfolio of 1,000 homogeneous loans, each with average probability of default of 0.01. Loss given default is 50 percent; promised interest income is 6.5 percent. Reproduced from Table 5.5 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


• Economic Capital—Loss level for an extraordinarily bad year—may be set at the 0.1 percent quantile (0.1%/99.9% VaR) of the distribution.

The expected income is $59,350, as calculated earlier. We might want to set reserves, an amount in case defaults are higher than expected, at the 5%/95% VaR level, between $2,825 and $3,390. We might set capital at $6,500, roughly the 0.1%/99.9% VaR.
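Since the default count here is binomial, these quantiles can be read directly off the distribution. A sketch using scipy (my own code, not from the text; discrete quantiles land on whole numbers of defaults, and the text's economic capital figure conservatively rounds up to the next default count):

    from scipy.stats import binom

    n_loans, p_default, mean_income = 1000, 0.01, 59350.0
    income = lambda k: (n_loans - k) * 1065.0 + k * 500.0 - 1_000_000.0

    for label, alpha in [("reserves, 5%/95%", 0.05),
                         ("VaR, 1%/99%", 0.01),
                         ("economic capital, 0.1%/99.9%", 0.001),
                         ("economic capital, 0.03%/99.97%", 0.0003)]:
        k = int(binom.isf(alpha, n_loans, p_default))  # upper-tail default count
        print(label, k, mean_income - income(k))       # loss relative to the mean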

Setting economic capital is a difficult problem. Economic capital is distinguished from regulatory capital because it is set in response to economic circumstances rather than regulatory or accounting rules. Economic capital supports a firm's risk-taking activities, providing the buffer against losses that would otherwise push the firm into bankruptcy. McNeil, Frey, and Embrechts (2005, section 1.4.3) lay out the following process for determining economic capital:

• First, determine a "value distribution," which is the result of quantifying all the risks faced by the firm, including but not limited to market, credit, and operational risk. (For the current simple model, if we assume that the portfolio of 1,000 loans is the total of the firm's business, the P&L distribution shown in Figure 11.3 and Table 11.2 is this value distribution.)

• Second, determine an acceptable probability of default (solvency standard) appropriate for the institution and horizon. A useful basis is company ratings and associated default rates. For example, a firm might target a Moody's Aa rating. Historical analysis of Moody's-rated Aa institutions shows a one-year default frequency of 0.03 percent.7 The firm would want a level of capital high enough that losses would be worse (implying bankruptcy) only with probability 0.03 percent.

• Finally, calculate economic capital as the appropriate quantile (the buffer needed to ensure bankruptcy only with the probability chosen in the second step). For a 0.03 percent probability of bankruptcy, that would be the Z = 0.03 percent/99.97 percent quantile. (For the current simple loan portfolio example, it would be roughly $7,300.)

Although the conceptual process for calculating economic capital is straightforward, the practical issues are challenging.

7 See, for example, Crouhy, Galai, and Mark (2003, table 8.3), in which they cite Carty and Lieberman (1996); or see Duffie and Singleton (2003, table 4.2).


Dependence, Correlation, Asymmetry, and Skewness

The stylized model of this section has intentionally been kept simple, but it is important to extend it in one particular direction: dependence across loans. As noted earlier, asymmetry or skewness is an important characteristic of credit risk, and dependence across loans is a major reason for asymmetry.

The model as formulated so far produces a symmetric default distribution and loss distribution—virtually no asymmetry. This occurs for even moderate numbers of loans, and the reason is easy to see: loans are assumed independent, the default distribution is binomial, and the binomial distribution tends to the normal for large n.

It is easy to produce asymmetry, however, by the very natural mechanism of dependence across defaults. That is, the probability of default for a given loan is higher when other loans or firms also default. The phenomenon of firms defaulting together is both easy to understand and often observed. Probability of default may go up and down for one of two reasons: common factors to which firms respond, or contagion. Common factors would be something to which all firms respond, such as an economic recession that makes default more likely for all firms. Contagion would be something that alters perceptions or behavior following an initial default, such as heightened investor scrutiny of corporate accounts following Enron's collapse—which might lead to the uncovering of malfeasance (and default) at other firms.

We will concentrate for now on common factors. Dependence here arises because the probability of default changes systematically for all firms. In the simplest example, there might be two states of the world: low probability and high probability of default. We might be measuring the probability of default in a particular year. The low-probability regime corresponds to a year when the economy is growing. The high-probability regime corresponds to a year when the economy is in recession.

Dependence across firms arises not because one default causes another, but because when we look at the future we do not know whether the next year will be a low- or high-probability year. If, however, a default were to occur, then it is more likely that we are in a high-default regime, and thus more likely that there will be other defaults. Conditional on being in the low-default or high-default regime, there is no correlation across defaults; but today we do not know whether next year will be low or high. This means that under the unconditional distribution (sitting here today, not knowing whether next year will be a low- or high-default regime), defaults next year look correlated.

This correlation or dependence across defaults will generate skewness as defaults cluster together. There will be relatively few defaults most of the time, but default probabilities are periodically higher and will create an episode with many defaults. Defaults may not happen often, but when they do there will be many—the upper tail of the default distribution (the lower tail of the P&L distribution) will be fat with defaults.



To fix ideas, let us look at a simple example. Consider a two-state world. There is a low-default regime in which the default probability is 0.007353, and a high-default regime in which the default probability is 0.025. Firm defaults are independent in each regime, so in each regime the distribution will be binomial (symmetric). In each of these two regimes, the income for our stylized portfolio of 1,000 loans will be as shown in Panel A of Figure 11.4.

We now consider the overall situation, which is a mixture of the two regimes. We will assume that at any point in time there is an 85 percent probability that we are in the low-default regime, and a 15 percent probability that we are in the high-default regime. At a particular time we are in one or the other, but we don't know which beforehand. With this setup, the overall average default probability is 0.01, just as it was originally. But now we have correlation across firm defaults, a correlation of 0.004. If one particular firm defaults, it is more likely that we are in the high-default regime, and thus more likely that other firms will also default—not because of the default of the first firm but simply because defaults for all firms are likely to be higher.
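The overall default probability and the default correlation for this mixture can be checked with a few lines of arithmetic (a sketch; the correlation formula used here is Equation 11.2 below, specialized to identical loans):

    # Two-regime mixture: 85% chance of a low-default year (p = 0.007353)
    # and 15% chance of a high-default year (p = 0.025).
    w_low, w_high = 0.85, 0.15
    p_low, p_high = 0.007353, 0.025

    p_bar = w_low * p_low + w_high * p_high          # unconditional default probability
    joint = w_low * p_low**2 + w_high * p_high**2    # P[two given loans both default]
    corr = (joint - p_bar**2) / (p_bar - p_bar**2)   # default correlation
    print(p_bar, corr)                               # about 0.01 and 0.004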

The overall distribution of income will be a mixture of the two distributions for the individual regimes. This is shown in Panel B of Figure 11.4, and we see that it is, naturally, skewed or asymmetric, with a fat lower tail. The asymmetry arises because the overall distribution is composed of a large part of the high-income (low-default) distribution and a smaller part of the low-income distribution, and the low-income distribution skews the lower tail of the overall distribution.

The mixing of good (low-default) and bad (high-default) worlds naturally produces correlation across defaults and skewed distributions; the correlation and skewness go hand in hand. In either the good or the bad world, defaults will tend to be symmetric. But at some times we are in the high-default world and will thus have many defaults, while at other times we are in the low-default world and will have few defaults. The larger number of defaults during bad times produces both the skewness, or fat upper tail of the default distribution (fat lower tail of the income distribution), and the correlation (since defaults tend to happen together).

This simple model also helps to explain why credit losses can be so devastating to a firm. The simple story that credit losses are asymmetric because there are "many small gains and a few large losses" is manifestly not true—we saw earlier that the default and P&L distribution quickly becomes symmetric for defaults independent across firms. In a more subtle form, however, the story contains some truth. When unusually bad events occur, they involve, naturally, an unusually large number of defaults. These defaults mean large losses. Correlation across defaults means that when things go bad, they go really bad—in a bad state of the world, losses look like the high-default (left) distribution in Panel A of Figure 11.4.


[Figure 11.4 here: Panel A, histograms of probability against income ($ thousands) for the low-default and high-default regimes; Panel B, a histogram of probability against income ($ thousands) for the mixture.]

FIGURE 11.4 Mixing Distributions: Income Distributions for Low- and High-Default Regimes (Panel A) and Mixture (Panel B)
Notes: Panel A shows the distribution of the one-year income from holding a portfolio of 1,000 homogeneous loans, each with probability of default of 0.025 (high-default regime) or 0.007353 (low-default regime). Loss given default is 50 percent; promised interest is 6.5 percent. Panel B shows the income distribution for a mixture that is 15 percent high-default and 85 percent low-default. Reproduced from Figure 5.18 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.



Mixing is a natural mechanism in credit risk for producing default correlation. It also helps to explain why credit risk can be difficult to model and manage. When things go bad, they can go very bad. If the distributions in Figure 11.4 are for the P&L over a year, then most years will be pretty good—the low-default regime shown in Panel A. Periodically, however, things will go bad and losses will look like the high-default regime in Panel A.

The default correlation for this example is only 0.004 but produces substantial skewness. Compare with Figure 11.3, where the overall probability of default is also 0.01 but there is no default correlation and little skewness: there, the 1%/99% VaR for losses is between 18 and 19 defaults, or roughly $54,600 of income ($4,750 below the mean). For Figure 11.4, the 1%/99% VaR is between 33 and 35 defaults, or roughly $46,000 of income ($13,350 below the mean). Even the low default correlation of 0.004 produces substantial skewness. There is only a small chance of the bad world (many defaults), but when it occurs, it produces substantially lower income, and it is exactly the low-probability left end of the tail that determines the VaR. It requires only a tiny default correlation to produce substantial skewness or asymmetry.

This model of mixing just a good and a bad state is clearly too simplistic, but it does illustrate two points. First, correlation can be produced not because firms depend on each other, but because all firms respond to the same underlying factors (in this case, either high or low defaults). Second, it takes only a very low level of correlation to produce a substantial degree of skewness or asymmetry in the default and loss distribution. (This differs from market price loss distributions, where we usually develop our intuition, and where small changes in correlation do not dramatically change the shape of the distribution.)

A more realistic model for the dependence structure, a simplified variation of that used by credit models such as KMV, is the threshold model with a factor structure. Default is assumed to occur when some random critical variable Xi falls below a critical threshold di:

default when Xi < di

Each loan or firm may have its own critical variable Xi and its own critical threshold di. For the simple homogeneous model considered here, where all loans are identical, all the di will be the same and all the Xi will have the same distribution.


Credit models of this form (based on the Merton [1974] approach and discussed more further on) build the relationship between the critical variable and the threshold from realistic economic and financial relationships, based on historical data and company analysis. For example, Xi might be the value of the firm (which may go up or down randomly) and di the notional value of loans the firm has taken out. The firm goes into default when the value of the firm falls below the value of the loans, so default occurs when Xi < di. The important point for now, however, is simply that there is some reasonable story that justifies the relationship between the random variable Xi and the fixed threshold di, where default occurs when Xi < di.

If we assume that Xi is normally distributed with zero mean and unit variance, then the probability of default is:

Probability of default: P[Xi < di] = Φ[di]   (11.1)

If all the Xi are independent, then the probabilities of default are independent and the model is exactly what we have discussed so far. It is, however, easy to build dependence across defaults by introducing correlation across the critical variables {Xi} for different loans or firms. Take the extreme, where X1 and X2 are perfectly correlated. Then firms 1 and 2 will always default together (or not default together). In the general case, X1 and X2 will be less than perfectly correlated, and the default correlation will be determined by the correlation of the Xi. The higher the correlation across the Xi, the higher the correlation in defaults.

We can calculate the default correlation, given the individual firm default probabilities and the critical variable correlation. Define Yi as the firm-i default indicator (that is, Yi = 0 when no default, Yi = 1 for default; see McNeil, Frey, and Embrechts, 2005, 344), then:

Average probability of default = P(Yi = 1) = E[Yi] = p̄i = P[Xi < di] = Φ[di]

Average probability of joint default = P[Yi = 1 & Yj = 1] = E[Yi · Yj] = P[Xi < di & Xj < dj]

var(Yi) = E[Yi²] − E[Yi]² = E[Yi] − E[Yi]² = p̄i − p̄i²

Default correlation = ( E[Yi · Yj] − p̄i · p̄j ) / sqrt[ (p̄i − p̄i²) (p̄j − p̄j²) ]   (11.2)


The term E[Yi · Yj] is the joint probability that both firms default. In the critical variable framework, this translates into a statement about the joint distribution of Xi and Xj. Since we commonly assume that Xi and Xj are jointly normal, this will be a probability statement about a bivariate standard normal with a given correlation.

Say we have a default probability p̄i = P[Xi < di] = Φ[di] = 0.01. This implies di = −2.3263 (Xi is a standard normal variable, and the probability that a standard normal variable is below −2.3263 is 0.01). Say the correlation across the critical variables Xi and Xj is ρ = 0.05. Then E[Yi · Yj] = P[Xi < di & Xj < dj] = 0.0001406 (the joint probability that two standard normals with correlation 0.05 will both be below −2.3263 is 0.0001406—see, for example, the approximation in Hull 1993, appendix 10B, or the functions built into Wolfram's Mathematica).8 Inserting these values into expression 11.2 gives:

Default correlation = 0.004, given individual firm default probability = 0.01 and critical variable correlation = 0.05
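This calculation is easy to reproduce with a bivariate normal CDF (a sketch using scipy rather than the Hull approximation mentioned in the footnote; the numbers match those in the text):

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    p = 0.01
    d = norm.ppf(p)          # critical threshold, about -2.3263
    rho = 0.05               # critical variable correlation

    # Joint probability that both critical variables fall below the threshold.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(np.array([d, d]))

    corr = (joint - p * p) / (p - p * p)   # Equation 11.2 with identical loans
    print(joint, corr)                     # about 0.0001406 and 0.004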

The critical variable correlation and the default correlation are both low, but these are typical values for observed credit defaults. McNeil, Frey, and Embrechts (2005, table 8.8) provide estimates of pairwise correlations from default data for 1981 to 2000. They find a one-year default probability for BB-rated issuers of 0.0097 and a pairwise default correlation of 0.0044.9

There are many ways to introduce such correlation but, as argued earlier, mixing through a so-called common factor model is a particularly useful form. We split the critical variable into two components, a common factor and an idiosyncratic factor:

Xi = B·F + c·ei   (11.3)

with

F = common factor, a random variable ~ N(0, 1)
ei = firm-specific independent idiosyncratic variable, ~ N(0, 1)
B, c = coefficients chosen to ensure Xi remains N(0, 1)

8 The approximation in Hull, which Hull credits to Drezner (1978; with Hull correcting a typo in Drezner's paper), produces slightly different values from Mathematica's BinormalDistribution, particularly in the tails. I presume the Hull and Drezner approximation is less accurate. See http://finance.bi.no/~bernt/gcc_prog/recipes/recipes/node23.html for an implementation of the Hull and Drezner algorithm in C.
9 For BBB, p̄ = 0.0023 and default correlation = 0.00149, while for B, p̄ = 0.0503 and default correlation = 0.0133.


The common factor F represents elements that affect all firms together. These might be common economic factors such as economic growth or the level of interest rates, or common industry conditions, as when airline companies are financially pressured by a rise in the relative price of energy. The firm-specific variable ei represents factors that are specific to the individual firm and are independent across firms.

A common factor structure such as 11.3 is often used in practice, and it represents a reasonable and practical representation of how firms' defaults might move together. It says that, conditional on the state of the variable F (the state of the economy or industry), firms are independent, but that firms default together because they are all affected by the common factor.10 The default correlation is induced through the common factor F, but conditional on a particular value of F, firms are independent.

In summary:

• The common factor (F) is the same for each firm.
• The firm-specific component (ei) affects only the particular firm i and is completely independent of any other firm.
• The correlation in the critical variable is controlled by the relative importance of the common factor (F) and the firm-specific component (ei).

The simplest form of a common factor model, where all loans or firms are assumed identical, is the equicorrelation factor structure:

Xi = √ρ · F + √(1 − ρ) · ei   (11.4)

with

F = common factor, a random variable ~ N(0, 1)
ei = firm-specific independent idiosyncratic variable, ~ N(0, 1)
ρ = proportion of variance attributable to the common factor. This will also be the correlation in the critical variable across firms.
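One step that the text uses implicitly is worth writing out: conditional on the common factor, each loan defaults independently, with a probability that moves with F. Under the structure in 11.4 (a standard manipulation, not taken verbatim from the text), default occurs when √ρ·F + √(1 − ρ)·ei < di, so:

    P[default | F] = P[ei < (di − √ρ·F)/√(1 − ρ)] = Φ[(di − √ρ·F)/√(1 − ρ)]

A bad year (a low draw of F) raises this conditional default probability for every loan at once, which is exactly the mixing mechanism discussed earlier.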

10 This is a very simple form of a factor structure, with a single common factor F that has the same effect on all firms. In practical applications, there may be more than one common factor F, and individual firms may be affected by the common factors in different ways (each firm may have its own coefficient B). The important point, however, is that there will be a small number of factors that are common to a large number of firms. Such a common factor structure models common economic or industry factors (either observed or latent) but does not capture contagion that alters perceptions following an initial default (e.g., heightened investor scrutiny of corporate accounts following Enron's collapse). Nonetheless, most practical model implementations use a form of common factor structure and do not model contagion directly.


The ρ term is the link between the common and the idiosyncratic components, determining how much of the overall variance of the critical variable Xi is due to the common factor versus the idiosyncratic factor.

Even a low value for ρ can give quite substantial asymmetry. Figure 11.5 shows the default and loss distribution for the model with ρ = 0 and ρ = 0.05 (producing default correlation 0.004). The ρ = 0 case is what we have discussed so far: no dependence across loans and a binomial distribution for defaults, resulting in a symmetric distribution for defaults and P&L. The ρ = 0.05 case represents dependence across loans. The dependence is not high, but it still produces substantial skewness.11 The skewness is obvious in Figure 11.5. We can also measure it by calculating the VaR. For the ρ = 0 case, the 1%/99% VaR is between 18 and 19 defaults (out of 1,000 loans) and a P&L between $4,520 and $5,085 below the mean. For ρ = 0.05, the 1%/99% VaR is dramatically higher: defaults between 34 and 35 and P&L between $13,560 and $14,125 below the mean.
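The dependent case can be approximated by simulating the common factor and using the conditional default probability written out after Equation 11.4 (a sketch, with my own choice of seed and number of simulated years):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(seed=7)
    n_loans, p, rho, n_years = 1000, 0.01, 0.05, 100_000

    d = norm.ppf(p)                    # critical threshold for p = 0.01
    F = rng.standard_normal(n_years)   # one common-factor draw per simulated year

    # Conditional on F, loans are independent, so the default count is binomial
    # with probability Phi[(d - sqrt(rho) F) / sqrt(1 - rho)].
    p_cond = norm.cdf((d - np.sqrt(rho) * F) / np.sqrt(1 - rho))
    n_defaults = rng.binomial(n_loans, p_cond)

    print(n_defaults.mean())               # about 10 defaults on average
    print(np.quantile(n_defaults, 0.99))   # about 34, versus about 18 when rho = 0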

The example so far has assumed that all loans are identical, so that the default probability is equal across loans and the distribution (for no correlation) is binomial. Introducing heterogeneity in the loans (while maintaining independence) does not change anything significantly. When loans are independent, the default distribution tends toward symmetry, while correlation breaks the symmetry and produces a skewed distribution.

As discussed in Section 11.6, the threshold framework discussed here can be translated to a Bernoulli mixture framework. In a Bernoulli mixture framework, the default process is conditionally independent, with correlation structured as mixing across independent realizations. This helps clarify the mechanism that produces the skewed distribution seen in Figure 11.5: each realization (conditional on a value of the common factors F) is binomial or, more generally, Bernoulli, and so will tend to be symmetric for a large number of loans. The correlation introduced by the factors F means that some realizations (states of the world) will have high default rates, producing many defaults, while other states of the world will have few defaults. The tendency for firms to default, or not, together produces the fat upper tail of the default distribution (fat lower tail of the income distribution).

In this example, the correlation across the threshold variables is 0.05, but as pointed out earlier, for p̄i = p̄j = 0.01, this translates into a correlation across defaults of only 0.0040. Defaults are rare, so default correlations are by their nature low. This highlights one of the difficulties of credit risk modeling: default correlations are small and can be difficult to measure in practice, but the distribution of defaults will be substantially affected by correlation. In Figure 11.5, the distribution for 1,000 loans with correlated defaults is substantially more fat-tailed than the independent distribution. The correlation across defaults is tiny, however, only 0.004.

11 The "dependent" line in Panel B of Figure 11.5 reproduces Figure 11.1 and is close to McNeil, Frey, and Embrechts (2005, fig. 8.1).


[Figure 11.5 here: Panel A, histograms of probability against number of defaults for the Independent and Dependent cases; Panel B, histograms of probability against income ($ thousands) for the same two cases.]

FIGURE 11.5 Number of Defaults for a Portfolio of 1,000 Homogeneous Loans—Alternate Dependence Assumptions
Note: Panel A is the number of defaults from holding a portfolio of 1,000 homogeneous loans, each with average probability of default of 0.01 and the common factor structure of Equation 11.4. The Independent case has threshold correlation ρ = 0; the Dependent case has critical variable correlation ρ = 0.05 and default correlation 0.004. Panel B is the one-year income from holding such a portfolio with loss given default of 50 percent and a promised interest rate of 6.5 percent. Reproduced from Figure 5.19 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


In Figure 11.5, the distribution for 1,000 loans with correlated defaults is substantially more fat-tailed than the independent distribution. The correlation across defaults is tiny, however—only 0.004.

One final note before closing the discussion on distributions and asymmetry: the asymmetry in the default distribution arises naturally out of the dependence and mixing of Bernoulli (default) processes, even with the underlying threshold variables and mixing distributions being normal and completely symmetric. For credit risk, the fat tails and asymmetry arise almost of necessity. Furthermore, there does not seem to be any substantial impact from using distributions that are themselves fat-tailed.12

Diversification and Correlation for Credit Risk

The analysis of dependence and credit risk is challenging. Much of our everyday experience and understanding of correlation and diversification comes from the arena of market risk, as applied to correlation across prices and returns. This experience and knowledge does not always carry over well to defaults and credit risk. Default is a rare event. Correlation across defaults will usually be close to zero, but even low correlations can have quite a dramatic impact. The degree of skewness and the sensitivity of the distribution to small changes in the degree of correlation provide yet another example of why credit risk modeling is so difficult: measuring the degree of dependence in the real world is difficult, but dependence makes such a large difference in the shape of the distribution that it is especially important to measure it precisely.

There are two underlying reasons why intuition developed around diversification of stocks does not carry over well to diversification of defaults and credit risks:

1. Correlations for credit defaults are very low (close to zero), and small changes in correlation have a large impact on the degree of diversification.

2. Correlation fundamentally changes the shape of the returns distribution, making it skewed with a fat lower tail.

12 This appears to conflict with the conclusion of McNeil, Frey, and Embrechts (2005, section 8.3.5), which appears to show substantial differences between using a multivariate normal versus a Student t critical variable. This is due to the embedded (nonlinear) dependence of the usual multivariate Student t. In my view, such dependence is better treated as additional dependence, or mixing, in addition to that explicitly modeled by the common factors, rather than as a result of the functional form of the threshold variable.


There are two rules of thumb regarding portfolio diversification that we learn from traded stocks: first, that a moderate number of assets (on the order of 10 to 30) is sufficient to reap most diversification benefits; and second, that small differences in correlation do not have a large impact on the benefits of diversification. These rules of thumb are valid for moderate correlations between assets (say, ρ > 0.3) but do not hold when correlation is close to zero, as it invariably is for credit portfolios. Default is a rare event, and so correlations between defaults will be low; for single-A credits, correlation might be 0.001 or less, while for single-B credits, it might be on the order of 0.015. Diversification for credit risks requires a large number of assets, and small differences in correlation can have a dramatic impact on risk, particularly in the tails.

The portfolio and diversification effect of low correlation does not depend on discrete losses and can be easily demonstrated for a portfolio of assets with continuous returns. Consider an equally weighted portfolio of n identical assets with continuous, normally distributed returns. For each asset, the mean return is μ, the standard deviation σ, and the pairwise correlation ρ. The average return for an equally weighted portfolio will be μ and the standard deviation

\sigma \sqrt{\rho + (1 - \rho)/n}

The overall portfolio return will also be normal.

For a large portfolio, that is, as n → ∞, the portfolio standard deviation will go to σ√ρ, which we can call the systematic, or nondiversifiable, component of the portfolio volatility. For a portfolio of size n, the residual or diversifiable component of the volatility will be

\sigma \sqrt{\rho + (1 - \rho)/n} - \sigma \sqrt{\rho}

Figure 11.6 shows the effect of correlation and portfolio size on diversification by way of two examples: correlation ρ = 0.3, which might represent market assets, and ρ = 0.01, which might represent credit default correlations (for something between BB and single-B corporate bonds). For ρ = 0.3, the portfolio standard deviation falls quickly as n increases, and most of the diversification effect is apparent with only 20 assets—the residual is only 6 percent of the systematic standard deviation. For the lower correlation, the systematic component is far lower but the residual component falls off much more slowly—for 20 assets, the residual is still 144 percent of the systematic component.13

13 One can measure the portfolio size at which one-half and three-quarters of the logarithmic reduction in standard deviation has been realized by calculating n* = (1 − ρ)/(ρ^0.5 − ρ) and n* = (1 − ρ)/(ρ^0.75 − ρ), respectively. For ρ = 0.3, this is 2.8 and 6.6 assets, while for ρ = 0.01, it is 11.0 and 45.8 assets.
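As a quick check of these figures, a few lines of Python (a sketch under the assumptions above—identical assets and equal weights—not code from the book) reproduce the residual-to-systematic ratios quoted for n = 20:

    import math

    def port_sd(rho, n, sigma=1.0):
        # standard deviation of an equally weighted portfolio of n identical assets
        return sigma * math.sqrt(rho + (1 - rho) / n)

    for rho in (0.3, 0.01):
        systematic = math.sqrt(rho)               # the n -> infinity limit, with sigma = 1
        residual = port_sd(rho, 20) - systematic  # diversifiable part remaining at n = 20
        print(f"rho={rho}: residual = {residual / systematic:.0%} of systematic")
    # prints roughly 6% for rho = 0.3 and roughly 144% for rho = 0.01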

Low correlation also has the effect of making the systematic portfolio volatility very sensitive to small changes in the level of correlation.


Consider a portfolio of assets with correlation ρ = 0.3. A single asset has volatility σ, while a fully diversified portfolio has systematic volatility 0.55σ. Correlations may be hard to estimate, so possibly the true correlation is ρ = 0.4 instead of 0.3, which gives a systematic volatility of 0.63σ, or 15 percent higher.

Now consider a credit portfolio with default correlation ρ = 0.001 (roughly that of BBB credits), where a fully diversified portfolio has systematic volatility of only 0.032σ. This is a huge diversification benefit, as it should be, since joint defaults will be rare for low correlation. But default is a rare event and default correlations are difficult to estimate, meaning they will have a large standard error. Say the true correlation were in fact ρ = 0.002—a difference of only 0.001. In this case, a fully diversified portfolio would have systematic volatility of 0.045σ, 41 percent higher. A very small error in estimating the correlation has a very large impact on the diversified portfolio's volatility.
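The arithmetic behind both comparisons is simply the square-root rule for the systematic volatility; as a worked check (using the numbers above):

\sigma_{\text{sys}}(\rho) = \sigma\sqrt{\rho}, \qquad \frac{\sigma_{\text{sys}}(0.4)}{\sigma_{\text{sys}}(0.3)} = \sqrt{0.4/0.3} \approx 1.15, \qquad \frac{\sigma_{\text{sys}}(0.002)}{\sigma_{\text{sys}}(0.001)} = \sqrt{2} \approx 1.41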

Figure 11.7 shows the impact of changes in correlation graphically. Panel A shows the normal distribution for correlation ρ = 0.3 and for 0.1 higher (ρ = 0.4). The distributions are not very different. Panel B shows the normal distribution for ρ = 0.012 and for 0.1 higher (ρ = 0.112). The distributions are quite different.

Credit default correlations tend to be quite low. McNeil, Frey, and Embrechts (2005, table 8.8) provide estimates of pairwise correlations from one-year default data for 1981 to 2000.

[Figure 11.6 here: relative standard deviation plotted against the number of assets (n = 0 to 20) for ρ = 0.3 and ρ = 0.01, with the ρ = 0.01 systematic level marked; at n = 20 the residual is 6 percent of systematic for ρ = 0.3 and 144 percent for ρ = 0.01.]

FIGURE 11.6 Diversification Effect (Relative Standard Deviation) for an Equally Weighted Portfolio of Identical Assets
Note: This shows the standard deviation as a function of the number of assets in a portfolio of identical assets on a standard scale.


[Figure 11.7 here: Panel A, High Correlation, compares income distributions for correlation 0.300 and 0.400 (x-axis roughly −$100,000 to $200,000); Panel B, Low Correlation, compares correlation 0.012 and 0.112 (x-axis roughly −$50,000 to $100,000).]

FIGURE 11.7 Impact of a 0.1 Change in Correlation for High and Low Correlation
Note: This shows the one-year income for equally weighted portfolios of 1,000 assets. All assets have the same mean return (3.675 percent) and standard deviation (11.9 percent). Returns are assumed log-normally distributed, with the return for asset i as Ri = F√ρ + ei√(1 − ρ), with F a common factor and ei the idiosyncratic component (independent of other e and F). Normalizing the variance of F and e to both be σ² (the variance of an individual asset; σ = 11.9 percent here), the parameter ρ is the correlation across assets.


They find a pairwise default correlation for BBB-rated issuers of 0.00149 and for B-rated issuers of 0.01328. As a result, credit portfolios will gain diversification benefits from large portfolio sizes—larger than one would expect based on experience from portfolios of standard market-traded assets. Note that, as is apparent from Figure 11.7, gains from diversification for a credit portfolio with low correlation can be quite substantial.

Low correlation for credit portfolios also means that the P&L distribution will be sensitive to small changes in correlation. A change of 0.1 will not have a huge effect when the correlation is 0.3 (as it might be for market-traded assets) but will have a large impact when the correlation is 0.01 (as it will be for default correlations).

The effects of diversification and correlation that we have been considering are a result of the correlation being close to zero and have nothing to do with the all-or-nothing aspect of defaults. The effect is the same for a portfolio of continuous market-type assets and for a portfolio of defaultable assets.

The important distinction between continuous and all-or-nothing defaultable assets arises in how the distribution's shape changes with correlation. Market-traded assets are usually normal or close to normal, and changing the correlation will change the spread of the portfolio return (the standard deviation) but not the shape. For credit risk, in contrast, introducing correlation across credit risks induces asymmetry and a substantially fat upper tail in the default distribution (fat lower tail in the P&L distribution).

Figure 11.8 compares a normal distribution with a simulated default distribution. Both examples are a portfolio of 1,000 identical assets with an expected return of 3.675 percent and standard deviation 11.9 percent. For Panel A, each asset has a continuous (normal) return with an expected return of 3.675 percent, and the overall portfolio has a normal return. Panel A shows both independent assets and assets with correlation ρ = 0.012; in both cases, the distribution is normal. For Panel B, in contrast, each asset either does not default (probability 99 percent, return 6.5 percent) or does default (probability 1 percent, return −50 percent). The solid line shows the portfolio P&L for independent assets, which produces a distribution very much like the market assets—hardly surprising, since the independent case is binomial, which tends to normal for large n. The dashed line is a portfolio with default correlation 0.012. The distribution has substantial skew. The standard deviation is close to that for market-traded assets, but the distribution itself has a fat lower tail and is decidedly not normal.

Default Process versus Default Parameters

The default and dependence parameters are the most important aspects of a credit model.


[Figure 11.8 here: Panel A, Market Assets, income distributions for correlation 0.000 and 0.012; Panel B, Credit Assets (Loans), income distributions for the Independent and Correlated cases; x-axis roughly $20,000 to $60,000 in each panel.]

FIGURE 11.8 Income for a Portfolio of Market Assets (Panel A) versus Loans or Credit Assets (Panel B)—Showing How Correlation for Credit Assets Produces a Skewed Distribution
Note: Panel A is the one-year income for an equally weighted portfolio of 1,000 market assets. All assets have the same mean return (3.675 percent) and standard deviation (11.9 percent). Returns are assumed log-normally distributed, with the return for asset i as Ri = F√ρ + ei√(1 − ρ), with F a common factor and ei the idiosyncratic component (independent of other e and F). Normalizing the variance of F and e to both be σ² (the variance of an individual asset; σ = 11.9 percent here), the parameter ρ is the correlation across assets. The variance of an equally weighted portfolio of n assets is [ρ + (1 − ρ)/n]σ². Panel B is the one-year income from holding a portfolio of 1,000 homogeneous loans, each with average probability of default of 0.05. The probability of default is as in Figure 11.1, with the common factor structure as in Figure 11.3. The Independent case has zero correlation across threshold variables (ρ = 0), and the Dependent case has ρ = 0.05, leading to a default correlation of 0.012.


The detailed mechanism generating defaults (conditional on the average probability of default and dependence across defaults) is less important than the probability and dependence themselves. The next section discusses various credit default models. There is a wide variety of models with many important differences. The most important differences arise in how they estimate the probability of default for individual firms and the dependence across firms. When these are roughly the same, the results are also roughly similar.14

11.4 TAXONOMY OF CREDIT RISK MODELS

Two Areas of Modeling Credit Risk

There are two areas of application and two ways in which "credit risk modeling" is used. The first is credit risk management—measuring and using the P&L distribution for a portfolio or business activity over some (usually long) period. Such credit risk modeling is the primary focus of this chapter, and the models are usually static in the sense that they focus on the distribution for a fixed time and are concerned less with the time process of default and loss or the details of when default occurs.

The second application of credit risk modeling is the pricing of credit-risky securities, whether new developments such as credit default swaps or traditional securities such as corporate bonds. This is a large area, addressing how to price instruments such as bonds, loans, CDS, or other credit derivatives. Models for pricing such instruments are usually dynamic in the sense of modeling the time at which default or other loss occurs—that is, modeling the stochastic process of losses. Such models are not the primary focus of this chapter.

Recognizing the two types of modeling and the distinction between them is useful for two reasons. First, the techniques used in pricing credit-risky securities are often related to those used in credit risk measurement. Second, understanding the distinction between pricing credit-risky securities and credit risk measurement clarifies the types of models and approaches used.

This chapter focuses mainly on credit risk management and the models used for credit risks that are not market-traded.

14 We will discuss more fully an exercise conducted by Crouhy, Galai, and Mark (2000, ch. 11) that compares a variety of industry models.


Risk management for securities that are market-traded can often dispense with complicated modeling because market risk factors, and the distribution of those factors, are available from the market.15

Basic Modeling Taxonomy

Credit models can usefully be classified according to two separate criteria: on the one hand, whether they are static (fixed, discrete time period) versus dynamic; and on the other hand, whether they are structural versus reduced form.16

Static models are more often applied to credit risk management. This aspect of credit risk modeling is focused on determining the P&L distribution for a portfolio of risks over a fixed, usually long, time period. The P&L distribution in turn is used to compute risk measures such as VaR or economic capital, and to make risk allocation and other management decisions. The primary question is the distribution of defaults or other credit events over a fixed period, and the modeling is static in that the focus is on the probability of default or change in credit status during the period, with the timing of default being decidedly secondary. These models usually work with the physical probability measure, in a sense to be discussed more fully further on.

Dynamic models are usually applied to the pricing of credit-risky securities, where the focus is primarily on the stochastic evolution of risk and default probability. The exact timing of default (or other credit events) matters and so must be modeled explicitly. Such models are usually formulated in continuous time, usually work under the equivalent martingale or risk-neutral measure, and are usually calibrated directly to market observations.

The structural versus reduced form categorization applies to both static and dynamic models, in that both static (fixed period) and dynamic (continuous time) models may be formulated as either structural or reduced form. Structural models, which might also be called firm-value models, detail the specific financial and economic determinants of default, usually considering assets versus liabilities and the event of bankruptcy.

15 I need to clarify a bit. For credit risk, we need to use complicated models to generate the distribution of defaults, the underlying factor that drives the loss distribution. For market risk, we generally do not need to use complicated models to generate the market risk factors—those can be observed. We may, however, need to use complicated pricing models to translate those market risk factors into the prices of the instruments we actually own. For example, we would need to use some sort of option model to price a bond option given the underlying yields.
16 Much of this distinction draws on McNeil, Frey, and Embrechts (2005, section 8.1.1).


Such models provide both qualitative and quantitative underpinnings for the likelihood of default. Reduced form models, in contrast, do not describe the precise determinants of default but rather model the default time directly as a function of economic or financial variables.

Structural models trace back ultimately to Merton (1974), which considers default in terms of a firm's assets relative to liabilities at the end of a fixed time period. Assets are treated as a random variable, and default occurs when (random) assets are below liabilities at the end of the period. That is, default occurs when the random variable (assets) crosses a threshold (liabilities). Structural models can be more generally termed threshold models, since default occurs when a stochastic variable (or stochastic process in dynamic models) crosses a threshold. (See McNeil, Frey, and Embrechts 2005, 328.)

Table 11.3 shows a basic taxonomy for credit models.

11.5 STATIC STRUCTURAL MODELS

Static structural models trace back to Merton (1974).17 Merton observed that a risky bond from a company that issues both debt and equity is equivalent to a risk-free bond less a put option on the company's assets. This may seem simple with the benefit of hindsight, but it is a profound insight that provides a powerful approach for thinking about the determinants of default.

Merton's Model

Take a very simple one-year framework in which a firm has assets, V, which are random (they will go up and down in value over the next year), and the company issues equity, S, and a one-year bond with a promise to pay a fixed amount, B. Default is very simple: when the value of the assets is below the bond payment (V < B), the bond is in default. Otherwise the bond is paid and shareholders receive the excess. From this, we can see that the equity is a call option on the value of the assets, with a strike equal to the promised bond payment, B.

Setting up notation:

Firm value (assets): V0, ṼT — value at the beginning and end of the period. ṼT is the driving variable in this model, assumed to be random, generally log-normal.

17 The Merton model is laid out in McNeil, Frey, and Embrechts (2005, section 8.2) and Crouhy, Galai, and Mark (2000, ch. 8 appendix 1 and ch. 9 sections 2 to 4).


Bond: B0, B̃T, B — bond value at the beginning and end of the period. B̃T is random (since the bond may default); B is the promised fixed payment.

Equity (shares): S0, S̃T — share value at the beginning and end of the period. S̃T is random.

Relation between bond, equity, and firm value: V0 = B0 + S0, ṼT = B̃T + S̃T.

TABLE 11.3 Taxonomy for Credit Models

Static (discrete/fixed time period)
  Application: primarily credit risk management—measuring the P&L distribution over a fixed period
  Modeling methodology: usually physical probability measure with risk premium
  Structural or firm-value (focus on the mechanism of firm default—usually the relation between firm-level assets and liabilities): paradigm is Merton (1974); threshold models such as KMV; credit migration such as CreditMetrics
  Reduced form (precise mechanism generating default not specified—default time modeled as a random variable): CreditRisk+; CreditPortfolioView

Dynamic (continuous time)
  Application: primarily pricing of credit-risky securities
  Modeling methodology: usually risk-neutral probability measure calibrated to market observations
  Structural: dynamic structural models are not widely used
  Reduced form: single instruments (for example, bond, loan, CDS)—modeled using default times and hazard rates (intensities); portfolio instruments (for example, CDO, basket swaps)—default times and hazard rates (intensities) modeled, with various assumptions concerning dependence or independence of default times and hazard rates


Shareholders have the right to walk away, so S̃T ≥ 0. In other words, the equity price at T has the payout of a call option on the firm's assets:

\tilde{S}_T = \max(\tilde{V}_T - B, 0) \qquad (11.5)

and the equity value at the beginning can be evaluated as a standard call option.

The bond at T will either be paid or default, depending on whether assets are greater or less than the promised payment B. If paid, the value is B; if defaulting, the value is ṼT. This can be written as:

\tilde{B}_T = B - \max(B - \tilde{V}_T, 0) \qquad (11.6)

which is the value of a fixed payment B less the payout of a put. This means that the risky bond at the beginning is equivalent to a risk-free bond (the discounted value of the fixed payment B) less a put option. Figure 11.9 shows the payout for the bond and the stock at the end of the one-year period.

The wonderful thing about this framework is that it provides the two most important pieces of information regarding credit risk: the probability of default and the loss given default. The probability of default is just the probability that the asset value is below the promised payment B:

P[\tilde{V}_T < B]

[Figure 11.9 here: payout diagram of bond and stock value against firm value at year-end. The bond payout rises one-for-one up to firm value B and is flat at B thereafter; the stock payout is zero below B and rises one-for-one above B.]

FIGURE 11.9 Payout Diagram for Bond and Stock at End of One-Year Period for Simple Merton Model


Assuming that the asset value ṼT is log-normally distributed, so that

\ln \tilde{V}_T \sim N[\ln(V_0) + (\mu - \sigma^2/2)T, \; \sigma^2 T] \qquad (11.7a)

then the default probability is

P[\tilde{V}_T \le B] = P[\ln \tilde{V}_T \le \ln B] = \Phi\left( \frac{\ln(B/V_0) - (\mu - \sigma^2/2)T}{\sigma\sqrt{T}} \right) \qquad (11.7b)

This is a reasonable (although simplified) framework that provides an estimate of default probability based on company characteristics that are or might be observable:

- Promised payment on the bond, B
- Current value of assets, V0
- Volatility of the company assets, σ
- Average growth rate of company assets, μ
- Time to maturity of the bond, T

This is now a pretty complete description (although in a very simplified framework) of the eventual payment for this bond: the probability of default is given by equation (11.7b), and the actual amount paid is given by equation (11.6)—either B if no default or ṼT if default. This provides exactly what is necessary to start making the model of Section 11.3 more realistic: the probability of default and the loss given default (or recovery, if the loan goes into default). In particular, it provides a reasonable estimate for the probability of default based on variables we might be able to measure. It thus addresses the biggest challenge in credit risk modeling: how to estimate characteristics such as the probability of default when default itself is rare and direct information on default is not available. Equation (11.7b) is essentially a way to arrive at estimates of default probability based on other, observable, characteristics of the firm.
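Equation (11.7b) is straightforward to compute. The sketch below (my illustration with hypothetical inputs, not the author's code) implements it and shows how the estimate responds to the observable characteristics just listed:

    from math import log, sqrt
    from scipy.stats import norm

    def merton_default_prob(V0, B, mu, sigma, T=1.0):
        # equation (11.7b): P[V_T <= B] for log-normal assets
        return norm.cdf((log(B / V0) - (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T)))

    # hypothetical inputs: assets 100, promised payment 70, 8% asset drift, 20% asset volatility
    print(merton_default_prob(100, 70, 0.08, 0.20))   # roughly 0.02
    print(merton_default_prob(100, 70, 0.08, 0.30))   # higher: PD rises with volatility
    print(merton_default_prob(120, 70, 0.08, 0.20))   # lower: PD falls as V0 rises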

The structure of the Merton model can be exploited further to price the equity and bond, and to derive expressions for credit spreads. The price of the equity and risky bond can be obtained from equations (11.5) and (11.6) (the terminal conditions) together with assumptions sufficient to apply Black-Scholes-type option pricing formulae.


The values today will be the expected discounted value under a risk-neutral measure, assuming that the asset value ṼT is log-normally distributed but with mean return equal to the (deterministic) risk-free rate r,18 so that:

\ln \tilde{V}_T \sim N[\ln(V_0) + (r - \sigma^2/2)T, \; \sigma^2 T]

At the moment, however, I will not pursue such pricing further (I return to the issue in Section 11.8, and good presentations can be found in the references at the beginning of this section). The important point for current purposes is that equation (11.7b) provides the probability of default (and equation (11.6) the loss upon default), based on economic and financial fundamentals. For credit risk management, the P&L distribution over a period of time is the primary focus, and the probability of default is the key item.

In equation (11.7b), default occurs when the random assets ṼT (the critical variable) cross the critical threshold B. This is the same form as the threshold structure introduced in equation (11.1) in Section 11.3, but the important point is that there are now economic and financial foundations for both the critical variable and the critical value.

The form of the threshold structure can be outlined diagrammatically, as shown in Figure 11.10. Default is the event in the lower tail of the distribution, when the firm assets fall below the promised bond payment, B.

18 The importance and implications of using the physical measure, with μ, versus the risk-free measure, with r, are discussed in Section 11.8.

[Figure 11.10 here: a random path of firm assets starting from V0, with the distribution of assets at bond maturity (VT) drawn at time T around the mean E(VT); the volatility of the asset distribution determines its spread, and the default threshold B cuts off the lower tail, whose area is the probability of default P[VT < B].]

FIGURE 11.10 Distribution of Firm Assets at Bond Maturity, Showing Default


Equation (11.7) and Figure 11.10 show that the important variables determining default are:

- The current asset value, V0.
- The distribution of the asset value ṼT at time T, parametrized by the mean and volatility, μ and σ.
- The promised debt repayment or the book value of liabilities, B.
- The time horizon, T.

From Figure 11.10, one can easily see how the probability of default will change. For example, it will increase as the volatility goes up (the distribution becomes more spread out, with more probability mass below B), and decrease as V0 increases (the default point B becomes further away from the starting point V0 and the mean E(ṼT)).

Moody's KMV (MKMV)

The Merton model as presented here provides a framework for thinking about default probability, but it is not realistic enough for practical application. The Moody's KMV product is a commercial model descended from Merton and was developed in the 1980s and 1990s.19 One of MKMV's most important contributions is the collection and analysis of a huge proprietary database of public and private company default and loss data. The data are used for the development, testing, and implementation of a practical and realistic model that follows the ideas outlined earlier.

The discussion here is brief, focusing particularly on three aspects of MKMV's implementation:

1. Estimation of firm assets from observable equity prices.
2. An alternate default probability function, instead of equation (11.7).
3. A realistic factor structure to capture dependence across firms, broadening the simple equicorrelation factor structure in equation (11.4).

These are three key components in MKMV's transformation of the theoretical ideas outlined in the preceding section into a functional product. My discussion follows the MKMV working paper by Crosbie and Bohn (2005) and the detailed discussion in Crouhy, Galai, and Mark (2000, ch. 9 section 5), also with reference to McNeil, Frey, and Embrechts (2005, section 8.2.3).

19 KMV started as a private company named after its founders Kealhofer, McQuown, and Vasicek, and was subsequently acquired by Moody's; it is now named Moody's KMV. Cf. www.moodyskmv.com.


I do not set out to present a comprehensive review of MKMV's implementation; more detail can be found in these citations.

Unobservable Assets  The probability of default is given by equation (11.7) and depends on the firm's assets: the current level of overall assets (V0), the average growth rate (μ), the volatility (σ), and how much of the assets are promised as debt (B). Equation (11.7) is reproduced here as (11.7′). We will see shortly that MKMV actually uses a somewhat different function than equation (11.7′), but the idea is the same, and default still depends on assets:20

PD_{Merton} = EDF_{Merton} = \Phi\left( \frac{\ln(B/V_0) - (\mu - \sigma^2/2)T}{\sigma\sqrt{T}} \right) \qquad (11.7')

Equation (11.7′) raises a problem because a firm's overall level of assets and the volatility of assets are generally not observed, and therefore (11.7′) cannot be used directly.

Assets consist of the market value of equity plus debt (V0 = B0 + S0 from Section 11.5). The market value of equity (S0) is usually available. The book value of debt (the promised amount, B earlier) is also usually available, but the market value (B0) rarely is. Debt is usually composed of bonds, bank loans, and other liabilities such as accounts receivable. Although some bonds may be traded, most debt is not, and because different classes of debt all have different seniority and payment provisions, it is generally not possible to obtain market values for all the debt.

MKMV overcomes the problem by a rather neat trick. Equation (11.5) shows that the terminal stock price is the payout for a call option, and so today's price will be the current value of such a call. Continuing in the Merton framework:

S_0 = C_{BS}(t, V_0, r, \sigma, B, T) = V_0\,\Phi(d_{0,1}) - B e^{-rT}\,\Phi(d_{0,2}) \qquad (11.8)

d_{0,1} = \frac{\ln(V_0/B) + (r + \sigma^2/2)T}{\sigma\sqrt{T}}, \qquad d_{0,2} = d_{0,1} - \sigma\sqrt{T}

20 MKMV does not use the simple one-period Merton model but rather a continuous-time extension due to Oldrich Vasicek and Stephen Kealhofer (two founders of the original KMV) known as the Vasicek-Kealhofer (VK) model. The firm's equity is a perpetual option with the default point acting as the absorbing barrier for the firm's asset value. The ideas are the same, however. Also note that instead of probability of default, MKMV uses, and has trademarked, the term Expected Default Frequency (EDF).


Both V0 and σ are unknown, but S0 is known. If we knew σ, then we could use (11.8) to back out V0; or if we knew V0, we could back out σ. Let us assume that σ is stable over time, pick some initial guess σ0, and apply equation (11.8) over a period of history to back out a time series of asset values V⁰_t. From this time series we can calculate a new guess for volatility, σ1, and then calculate a new time series of asset values. This apparently leads to a stable pair {σ*, V*_t}. Using the σ* we obtain, together with today's S0, gives us V0 and the probability of default from equation (11.7′).
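A rough sketch of this iteration (my reconstruction under the one-period framework, with hypothetical inputs; the helper names, the initial guess, and the daily-data annualization are my own choices, and MKMV's production VK model differs) is:

    import numpy as np
    from math import log, sqrt, exp
    from scipy.stats import norm
    from scipy.optimize import brentq

    def bs_call(V, B, r, sigma, T):
        # equation (11.8): equity as a call on firm assets
        d1 = (log(V / B) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        return V * norm.cdf(d1) - B * exp(-r * T) * norm.cdf(d1 - sigma * sqrt(T))

    def implied_assets(S, B, r, sigma, T):
        # invert S = C_BS(V, ...) for V; the root lies between S and S + 2B
        return brentq(lambda V: bs_call(V, B, r, sigma, T) - S, S, S + 2 * B)

    def back_out_assets(equity_series, B, r, T=1.0, sigma0=0.3, tol=1e-6):
        sigma = sigma0
        for _ in range(100):
            V = np.array([implied_assets(S, B, r, sigma, T) for S in equity_series])
            # re-estimate asset volatility from the implied asset series
            # (annualized assuming daily observations -- an assumption for illustration)
            new_sigma = np.std(np.diff(np.log(V)), ddof=1) * sqrt(252)
            if abs(new_sigma - sigma) < tol:
                break
            sigma = new_sigma
        return V, sigma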

Default Probability Function (Expected Default Frequency)  Default occurs in the Merton model when the random level of assets ṼT is below the default threshold B at maturity T, as shown in equation (11.7) and Figure 11.10. The expression

\frac{\ln(V_0) - \ln(B) + (\mu - \sigma^2/2)T}{\sigma\sqrt{T}} \qquad (11.9)

that forms the argument of equation (11.7) provides a concise description of what determines the likelihood of default.

MKMV does not use equation (11.9) exactly, but rather defines a variable called Distance to Default:

DD \equiv \frac{V_0 - B}{\sigma\,V_0} \qquad (11.10)

Expressions (11.9) and (11.10) are actually closely related: (11.10) is an approximation to (11.9) for T = 1, since μ and σ² are small and ln V0 − ln B ≈ (V0 − B)/V0.21 Both measure the distance (roughly as a percentage) between the current asset level (V0) and a default threshold (B), scaled by the asset volatility. The distance to default DD is often spoken of as the "number of standard deviations away from the default threshold B."

The exact form of the default relationship (11.7) in the Merton framework may be too simplistic for a working default model. MKMV has studied many firms and observed that default occurred when the asset value was less than total liabilities but more than short-term debt. In other words, default does not occur exactly when assets fall below total liabilities. To account for this, MKMV has made two adjustments.

21 In practice, MKMV adjusts the expression (11.10) for time by including average growth in assets (μT) and scaling volatility by √T, so, in practice, the two formulations are virtually the same.


The first is to take the default threshold B as a combination of short-term and long-term debt: roughly, debt to be serviced over the chosen horizon plus one-half of long-term debt.

The second, and more complex, adjustment is to dispense with the normal distribution function Φ[·] as the function relating the distance to default (expression (11.10) or (11.9)) to the probability of default. MKMV instead builds a function

EDF_{MKMV} = f[DD]

based on an empirical analysis of a large number of companies and many events of default. The relationship between default probability and distance to default appears to be stable across industry, time, and size of company; that is, the distance to default is the important variable, and differences in distance to default across industry, and so on, account for the observed differences in frequency of default. The function f[·] will obviously share important characteristics with the normal distribution function Φ[·]: in particular, it will be monotonic in its argument (the default probability falling as the distance to default rises), and it will lie between zero and one.

Factor Structure and Dependence across Firms  The focus so far has been on determining the probability of default for a single firm in isolation. This is no small undertaking, and it has addressed one major issue regarding the stylized model of Section 11.3: estimating the probability of default for each firm, based on specific information pertaining to that particular firm. But the dependence structure across firms is critical for understanding the P&L distribution for a portfolio, as argued in Section 11.3.

Extending the Merton model to two (or more) firms is straightforward—each firm has its own asset value and its own promised bond payment. Default still occurs when the asset value is below the promised bond payment:

default of firm 1: \tilde{V}^1_T < B_1
default of firm 2: \tilde{V}^2_T < B_2

Default is governed by the bivariate random variable \{\tilde{V}^1_T, \tilde{V}^2_T\}. When the asset values are independent, then the defaults are also independent, and when the asset values are correlated, then the defaults are correlated. The Merton model thus provides a useful framework for thinking about default correlation and dependence across firms: it is determined by correlation across the firm asset values.


When the asset values are positively correlated, then the probability of joint default is greater than the product of the individual default probabilities:

P[\tilde{V}^1_T < B_1 \text{ and } \tilde{V}^2_T < B_2] > P[\tilde{V}^1_T < B_1] \cdot P[\tilde{V}^2_T < B_2]

and the default correlation is positive:

\text{default correlation} = \frac{P[\tilde{V}^1_T < B_1 \text{ and } \tilde{V}^2_T < B_2] - p_1 p_2}{\sqrt{(p_1 - p_1^2)(p_2 - p_2^2)}}

(writing p_1 for P[\tilde{V}^1_T < B_1]).
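Under this bivariate-normal setup, the joint default probability, and hence the default correlation, can be evaluated numerically. A sketch (my illustration, assuming standardized normal critical variables with a given asset correlation):

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def default_correlation(p1, p2, asset_corr):
        # thresholds such that P[V_i < B_i] = p_i for standardized critical variables
        d1, d2 = norm.ppf(p1), norm.ppf(p2)
        joint = multivariate_normal(mean=[0, 0],
                                    cov=[[1, asset_corr], [asset_corr, 1]]).cdf([d1, d2])
        return (joint - p1 * p2) / np.sqrt((p1 - p1**2) * (p2 - p2**2))

    # e.g., both firms with p = 0.01 and asset correlation 0.05
    print(default_correlation(0.01, 0.01, 0.05))   # roughly 0.004, the figure quoted earlier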

In Section 11.3, we had a very simple factor model for correlation across critical variables in the form of a simple threshold model: equation (11.4), reproduced here:

X_i = \sqrt{\rho}\,F + \sqrt{1 - \rho}\;e_i \qquad (11.4)

The Xi are critical variables for different firms i, with default occurring when Xi < di. In the Merton model the random asset value at maturity Ṽⁱ_T is the critical variable, and the promised payment Bi is the critical value. We can generalize the simple factor structure in (11.4) by allowing F to be multidimensional (instead of just a single F) and allowing each firm to have its own sensitivity to the factors (instead of the same ρ for all firms). An example with three factors would be:

\tilde{V}^i_T = \alpha_i F_1 + \beta_i F_2 + \gamma_i F_3 + e_i \qquad (11.11)

In this example, factor F1 could be an overall macroeconomic factor such as GDP growth, while F2 and F3 could be industry-specific factors. The ei is a random variable, independent of the Fi, representing variability in the firm's asset level around the average level determined by the common factors.

In such an example, the macro factor would affect all companies. Most firms would do worse during a recession than an expansion; the coefficient αi would be positive for most firms, so that a low value for F1 would lower the critical variable Ṽⁱ_T (making default more likely), and vice versa. The industry variables would affect only companies within the particular industry; for example, the coefficient βi would be zero for a firm not in the industry represented by F2.

The common factor structure of equation (11.11) can accommodate an arbitrary number of common factors (although, to be useful, the number of factors should be much less than the number of firms).


It is not the only correlation structure one could use, but it does have a number of benefits. First, it produces a parsimonious correlation structure, allowing relatively flexible correlation across firms with only a small number of parameters. Second, it ensures that the dependence across firms and default probabilities arises solely from the dependence on the common factors: conditional on a realization of the factors Fi, defaults across firms are independent (remember the assumption that the factors Fi are independent of the idiosyncratic components ei).

The conditional independence of defaults has substantial benefits, as discussed further on and in McNeil, Frey, and Embrechts (2005, section 8.4). In particular, it means that the threshold model can be recast as a Bernoulli mixture model, with advantages in simulation and statistical fitting.

For a practical application such as MKMV, the common factor structure is slightly modified from equation (11.11). The firm asset return rather than the level is modeled, and the common factor structure is multilayered, depending on global, regional, country, and industry factors. Crouhy, Galai, and Mark (2000, ch. 9 section 7) discuss this in some detail.

Credit Migration and CreditMetrics

In firm-value threshold models such as Merton and MKMV, default is determined by an asset variable crossing a default threshold. An alternative approach is to estimate the default probability of a firm by an analysis of credit migration; that is, the migration of a firm through various credit-rating categories.22 This is the approach taken by CreditMetrics, a commercial product developed by JPMorgan and the RiskMetrics Group, and first published in 1997.23

Single-Firm (Marginal) Migration Matrixes  The goal of credit migration modeling is to understand a specific firm's probability of default, and its relation to other firms' default probabilities, using historical credit migration data.

22 We will see shortly, however, that a credit migration model can be reformulated as a threshold model.
23 See CreditMetrics—Technical Document, published originally by RiskMetrics in 1997, republished 2007, at www.riskmetrics.com. Crouhy, Galai, and Mark (2000, ch. 8) discuss CreditMetrics in some detail. McNeil, Frey, and Embrechts (2005, section 8.2.4) have a concise discussion and also show how a credit migration model can be embedded in a firm-value model. Marrison (2002, ch. 18) discusses credit migration and migration matrixes.


The focus is on the transition or migration matrix: first, a firm is categorized according to some credit-rating scheme, and then the probabilities of transiting between categories over some period (usually one year) are applied to estimate the probability of default and changes in credit status. The transition matrix is usually estimated by measuring the behavior of a large population of firms over a long period.

To start, and to fix ideas, consider a trivial case with only two categories, solvent (not-in-default) and default. The migration is pretty simple—moving from solvent to default. The credit migration matrix is trivial:

\begin{pmatrix} \text{solvent} \to \text{solvent} & \text{solvent} \to \text{default} \\ \text{default} \to \text{solvent} & \text{default} \to \text{default} \end{pmatrix} \qquad (11.12)

with values that might be something like:

\begin{pmatrix} 0.99 & 0.01 \\ 0.00 & 1.00 \end{pmatrix}

This says that for a solvent firm, the probability of default over the next year is 0.01, while the probability of staying out of default is 0.99. Once in default, a firm stays there—the probability of moving out of default is zero.

This migration model, in fact, is exactly the default model of Section 11.3, where all firms had the same probability of default, 0.01. Such a migration model is easy to understand but not very useful. A major aim of any practical credit risk model is to distinguish between firms and to differentiate risks: to arrive at firm-specific estimates of risk factors such as the probability of default. A migration model that makes no distinction across firms except in or out of default adds almost nothing toward the goal of estimating firm-level parameters.

This simple migration model can easily be extended, however, simply by categorizing the probability of default according to credit rating. Credit ratings are publicly available for traded companies and are estimated by firms such as Standard & Poor's, Moody's Investors Service, and Fitch Ratings. Quoting S&P, "Credit ratings are forward-looking opinions about credit risk. Standard & Poor's credit ratings express the agency's opinion about the ability and willingness of an issuer, such as a corporation or state or city government, to meet its financial obligations in full and on time" (www.standardandpoors.com/ratings/en/us/).


A credit rating can be considered a proxy estimate of the probability of default and possibly the severity of loss, estimated using objective data, subjective judgment, and experience.24

The preceding migration matrix could be extended by categorizing companies by credit rating (in this case, S&P ratings) and measuring the probability of default conditional on the rating:25

\begin{pmatrix} & \text{solvent} & \text{default} \\ \text{AAA} & 1.000 & 0.000 \\ \vdots & \vdots & \vdots \\ \text{BB} & 0.989 & 0.011 \\ \text{B} & 0.948 & 0.052 \\ \text{CCC} & 0.802 & 0.198 \end{pmatrix} \qquad (11.13)

This matrix differentiates across credit ratings, showing that the one-year default probability for a BB-rated company is 0.011 (and the probability of not defaulting is 0.989). Assume for now that credit ratings do appropriately differentiate across companies in terms of likelihood of default; then such a matrix goes far toward addressing the first risk factor identified in Section 11.3: estimating each firm's probability of default. Assigning a probability of default according to a borrower's credit rating would make the stylized model of Section 11.3 substantially more realistic.

The matrix (11.13) considers only migration into default, but there is far more information available. Credit rating agencies take many years of data on firms and ratings, observe the ratings at the beginning and end of each year, and calculate the relative frequency (probability) of moving between ratings categories. A firm rarely moves from a high rating directly to default, but rather transits through intermediate lower ratings, and this information on transitions between ratings is usually used in an analysis of credit risk. Table 11.4 shows a full transition matrix, giving the probability of moving from the initial rating (at the beginning of the year) to the terminal rating (at the end of the year).

24 Crouhy, Galai, and Mark (2000, ch. 7) and Crouhy, Galai, and Mark (2006, ch. 10) provide a particularly detailed and useful explanation of the public credit ratings provided by S&P and Moody's, listing the ratings categories and definitions. They also discuss internal ratings systems often used by banks or other financial institutions. Marrison (2002, ch. 19) discusses credit ratings. Information on ratings categories and definitions can also be found on the ratings agencies' websites: www.standardandpoors.com/ratings/en/us/ and www.fitchratings.com.
25 From Standard & Poor's CreditWeek (April 15, 1996), quoted in RiskMetrics (1997/2007).


For a firm initially rated AA, the probability of being in default at the end of the year is essentially zero, but the probability of being downgraded to A is relatively high, at 7.79 percent.

Using a multistate transition matrix as in Table 11.4 has a few advantages. First, the probability of default can be estimated over multiple years by straightforward matrix multiplication. When the initial state is represented by a vector S0 with 1 in the appropriate entry, and the migration matrix in Table 11.4 is denoted M, then the vector of probabilities of being in each credit-rating state after one year is (with M^T denoting the matrix transpose of M):

S_1 = M^T \cdot S_0

The probabilities after two and three years are:

S_2 = M^T \cdot M^T \cdot S_0 \qquad S_3 = M^T \cdot M^T \cdot M^T \cdot S_0

If the initial state is AA:

S_0 = (0, 1, 0, 0, 0, 0, 0, 0)^T

S_1 = (0.0070, 0.9065, 0.0779, 0.0064, 0.0006, 0.0014, 0.0002, 0.0000)^T

S_2 = (0.0128, 0.8241, 0.1420, 0.0157, 0.0020, 0.0028, 0.0004, 0.0002)^T

TABLE 11.4 One-Year Transition Matrix

Initial    Terminal Rating (end of one year)
Rating     AAA      AA       A        BBB      BB       B        CCC      Default
AAA        0.9081   0.0833   0.0068   0.0006   0.0012   0.0000   0.0000   0.0000
AA         0.0070   0.9065   0.0779   0.0064   0.0006   0.0014   0.0002   0.0000
A          0.0009   0.0227   0.9105   0.0552   0.0074   0.0026   0.0001   0.0006
BBB        0.0002   0.0033   0.0595   0.8693   0.0530   0.0117   0.0012   0.0018
BB         0.0003   0.0014   0.0067   0.0773   0.8053   0.0884   0.0100   0.0106
B          0.0000   0.0011   0.0024   0.0043   0.0648   0.8346   0.0407   0.0520
CCC        0.0022   0.0000   0.0022   0.0130   0.0238   0.1124   0.6486   0.1979

Source: Standard & Poor's CreditWeek (April 15, 1996), quoted in RiskMetrics (1997) and by subsequent authors.
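A short sketch (not the book's code) of this matrix arithmetic, appending an absorbing default row to the seven rows of Table 11.4 so that M is square (an assumption consistent with the earlier observation that once in default, a firm stays there):

    import numpy as np

    ratings = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "Default"]
    M = np.array([   # rows: initial rating; columns: terminal rating (Table 11.4)
        [0.9081, 0.0833, 0.0068, 0.0006, 0.0012, 0.0000, 0.0000, 0.0000],
        [0.0070, 0.9065, 0.0779, 0.0064, 0.0006, 0.0014, 0.0002, 0.0000],
        [0.0009, 0.0227, 0.9105, 0.0552, 0.0074, 0.0026, 0.0001, 0.0006],
        [0.0002, 0.0033, 0.0595, 0.8693, 0.0530, 0.0117, 0.0012, 0.0018],
        [0.0003, 0.0014, 0.0067, 0.0773, 0.8053, 0.0884, 0.0100, 0.0106],
        [0.0000, 0.0011, 0.0024, 0.0043, 0.0648, 0.8346, 0.0407, 0.0520],
        [0.0022, 0.0000, 0.0022, 0.0130, 0.0238, 0.1124, 0.6486, 0.1979],
        [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 1.0000],  # default absorbs
    ])

    S0 = np.zeros(8); S0[1] = 1.0   # start at AA
    S1 = M.T @ S0                   # one-year state probabilities
    S2 = M.T @ S1                   # two-year: default entry comes out near 0.0002
    print(dict(zip(ratings, np.round(S2, 4))))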


Note that the one-year default probability is zero, but the two-year probability is 0.0002. This is because after one year there is some probability that the firm will have fallen to A or BBB, and from there it has a positive probability of being in default a year later.

The second, and even more important, benefit is that changes in credit status beyond mere default can be modeled. For a firm rated AA, the probability of default in one year is zero according to Table 11.4, but the probability of moving to a lower rating such as A is relatively high—0.0779 according to Table 11.4. The market value of liabilities for a firm that was downgraded from AA to A would certainly fall, and such a fall should be taken into account when modeling credit risk.

The stylized model of Section 11.3 modeled default only and did not account for such changes in credit status. That was appropriate because we assumed the loans matured at the end of one year, so the only states they could end in were default or full payment. In reality, the world is not so neat, and the impact of changes in credit status, represented here by changes in rating, should generally be incorporated. Crouhy, Galai, and Mark (2000, ch. 8) and Marrison (2002, ch. 18) discuss the mechanics of modeling losses incorporating credit migration beyond default.

Joint Probabilities, and Migration as Threshold Model  The credit migration framework has addressed the probability of default, the first of the risk factors discussed in Section 11.3. The second factor, correlation or dependence across defaults, is equally important but is not addressed by the migration matrix shown in Table 11.4. The matrix shows the marginal probability of migration; that is, the probability for a firm considered in isolation. If defaults were independent (which they are not), then the joint probability of two firms migrating would be the product of the individual (marginal) probabilities. Consider two firms, one initially rated A and the other BB, and the probability that after one year they both end up B-rated. The probability if they were independent would be:

P[(\text{firm 1 goes A to B}) \text{ and } (\text{firm 2 goes BB to B}) \text{ assuming independence}] = 0.0026 \times 0.0117 = 0.0000304 \qquad (11.14)

The joint probability will generally not equal the product, because migration and default across firms are not independent. One natural mechanism creating dependence is firms in the same region or industry being affected by common factors—all doing well or poorly together—but we discuss that further on.


For now, equation (11.14) helps explain why the historical data collected and published by rating agencies, which lend themselves naturally to the creation of a marginal migration matrix such as Table 11.4, are not as useful for analyzing the joint probability of migration and default. Even though not perfectly accurate (events are not independent), equation (11.14) gives the right order of magnitude for the joint probability. It is apparent that joint events such as the one described by (11.14) are rare, and so there will be very few to observe. Furthermore, there is a huge number of possible joint outcomes.

Consider Table 11.4, for which there are eight categories, AAA through default. For the single-firm (marginal) probability analysis, there are seven starting categories by eight ending categories, or 56 possibilities (the 56 entries of Table 11.4). For a joint two-firm analysis, there will be 49 starting categories (7 × 7, consisting of AAA&AAA, AAA&AA, AAA&A, . . . ) and 64 ending categories, making 3,136 possibilities. Most of these possibilities will be quite rare, and it would take a huge sample of firms and a long period to obtain any solid estimates.

The solution chosen by CreditMetrics is to embed the credit migrationprocess in a threshold-model framework. Consider the simple two-statemigration matrix (11.12), with migration only from solvent to default. Thisis in fact equivalent to the Merton or KMV threshold model discussedearlier: default occurs when some random critical variable Xi falls below acritical threshold di:

default when X_i < d_i

Diagrammatically, this is shown by Figure 11.11, with a normally distributed X and a critical threshold d to the far left. The firm remains solvent if the variable X is above the threshold d, and defaults if X is below d.

FIGURE 11.11 Critical Variable with Single (Default) Threshold (critical variable X with critical threshold d; the area to the left of d is the probability of default)


This matches the migration matrix in (11.12) as long as d is chosen so that the area to the left of d is equal to the probability of default.

The way to generalize this and match a full migration matrix such as Table 11.4 should be clear. First, assume a critical variable for each initial rating, so we have a set of critical variables:

{X_AAA, X_AA, . . . , X_CCC}

Second, for each critical variable, choose a set of critical threshold levels to match the migration probabilities. For an A-rated company, this will be:

d_0^A s.t. P[X_A < d_0^A] = P[A → default]
d_1^A s.t. P[d_0^A < X_A < d_1^A] = P[A → CCC]
  . . .
d_5^A s.t. P[d_5^A < X_A < d_6^A] = P[A → AA]
d_6^A s.t. P[d_6^A < X_A] = P[A → AAA]

Figure 11.12 shows what this will look like diagrammatically. The thresholds are simply chosen to match the (marginal) migration probabilities in a migration or transition matrix such as Table 11.4.
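A minimal sketch of this construction: given one row of a migration matrix, the thresholds are just standard-normal quantiles of the cumulative probabilities. The migration row below is illustrative only (Table 11.4 itself is not reproduced here).

```python
import numpy as np
from scipy.stats import norm

# ILLUSTRATIVE migration row for an A-rated firm, in the order
# [default, CCC, B, BB, BBB, A, AA, AAA]; entries must sum to one.
row_A = np.array([0.0001, 0.0002, 0.0026, 0.0048, 0.0552, 0.9105, 0.0227, 0.0039])
assert abs(row_A.sum() - 1.0) < 1e-9

# d_k is the standard-normal quantile of the cumulative probability up to
# category k: P[X < d_0] = P[A -> default], P[d_0 < X < d_1] = P[A -> CCC], ...
cum = np.cumsum(row_A)[:-1]       # cumulative probabilities for d_0 .. d_6
thresholds = norm.ppf(cum)
for k, d in enumerate(thresholds):
    print(f"d_{k} = {d:+.4f}")
```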

The migration approach has now been embedded in a threshold framework. The joint probability of defaults is not modeled directly, but rather indirectly through the mechanism of the underlying threshold variables, X. For practical application, the threshold variable is assumed to be the assets of the firm, normally distributed, as in the Merton and KMV models.

FIGURE 11.12 Threshold Variable for Migration of A-rated Company, Multiple Critical Thresholds (critical variable X_A with thresholds d_0 < d_1 < . . . < d_6 delimiting the probabilities A→default, A→CCC, . . . , A→AAA)


The correlation in assets across firms then induces dependence in the defaults. For example, the joint probability of migration, from equation (11.14), no longer assuming independence, will be:

P[(firm 1 goes A → B) and (firm 2 goes BB → B), assuming dependence]
  = P[(X_1^A is between the thresholds that determine A → B) and (X_2^BB is between the thresholds for BB → B)]
  = P[(d_3^A < X_1^A < d_4^A) and (d_3^BB < X_2^BB < d_4^BB)]   (11.14′)

This is the joint probability for a bivariate normal, with the correlation between the asset variables X_1^A and X_2^BB.
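The bivariate-normal mechanics of equation (11.14′) are easy to sketch. The thresholds below are hypothetical stand-ins for (d_3^A, d_4^A) and (d_3^BB, d_4^BB), and rho is an assumed asset correlation; the point is the rectangle-probability calculation, not the particular numbers.

```python
from scipy.stats import multivariate_normal

def rect_prob(lo_hi_1, lo_hi_2, rho):
    """P[lo1 < X1 < hi1 and lo2 < X2 < hi2] for a standard bivariate normal
    with correlation rho, by inclusion-exclusion on the orthant CDF."""
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda x, y: multivariate_normal.cdf([x, y], mean=[0, 0], cov=cov)
    (a1, b1), (a2, b2) = lo_hi_1, lo_hi_2
    return F(b1, b2) - F(a1, b2) - F(b1, a2) + F(a1, a2)

dA = (-2.72, -2.50)    # hypothetical thresholds bracketing A -> B
dBB = (-2.18, -1.90)   # hypothetical thresholds bracketing BB -> B
for rho in (0.0, 0.2, 0.4):
    print(f"rho = {rho}: joint probability = {rect_prob(dA, dBB, rho):.7f}")
```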

This is the standard approach for modeling the dependence or joint migration probability for a migration model (the approach originally proposed in RiskMetrics 1997/2007): the model is embedded in a threshold framework. Because CreditMetrics can be treated as a threshold model, and correlations are actually determined by a structural model of asset correlations across firms, it is classified as a structural model in the taxonomy of Table 11.3.

The correlation in assets across firms is usually modeled using a common factor structure, as in (11.11):

X_i = b_i F + e_i   (11.15)

where
  X_i = threshold variable, usually asset return, for firm i (written as V_i in (11.11) earlier)
  F = common factors (may be single or multiple factors)
  b_i = firm i's sensitivity to the factors (usually called factor loadings)
  e_i = firm i's idiosyncratic component contributing to asset variability; independent of the factors F

With this structure, dependence across firms arises from the impact of common factors represented by F. These might be observable macroeconomic factors (such as GDP or aggregate unemployment) or industry factors (such as whether a firm is a member of a particular industry). They could also be unobservable or latent factors that are shared across firms, say in the same region or industry. The factors are common across firms, although the responses of individual firms might differ. (For example, an automaker might be hurt by low GDP growth or high unemployment as people buy fewer cars, while Wal-Mart might be helped as people switch to discount retailers.)
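As a small illustration of (11.15), the sketch below simulates two firms' threshold variables with assumed loadings on a single common factor; when each X_i is scaled to unit variance, the implied asset correlation is simply the product of the loadings.

```python
import numpy as np

# One-factor version of (11.15): X_i = b_i * F + e_i, with assumed loadings.
rng = np.random.default_rng(0)
b = np.array([0.5, 0.3])          # hypothetical factor loadings
sig_e = np.sqrt(1 - b**2)         # idiosyncratic vol chosen so var(X_i) = 1

n = 1_000_000
F = rng.standard_normal(n)                                   # common factor
X = b[:, None] * F + sig_e[:, None] * rng.standard_normal((2, n))
print("simulated asset correlation:", np.corrcoef(X)[0, 1])  # ~ b1*b2 = 0.15
```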


MKMV and CreditMetrics Compared

McNeil, Frey, and Embrechts (2005, section 8.2.4) compare some of the pros and cons of MKMV versus CreditMetrics's credit migration approach. In brief:

- MKMV approach advantages:
  - MKMV's methodology should reflect changes more quickly than ratings agencies. Rating agencies are slow to adjust ratings, so the current rating may not accurately reflect a company's current credit status. Since credit migration modeling depends on appropriate categorization of firms, this can lead to incorrect estimates of default probabilities.
  - MKMV's expected default frequency (EDF) should capture dependence on the current macroeconomic environment more easily than historical transitions, which are averaged over economic cycles.
- Credit migration approach advantages:
  - Credit migration transition rates should not be sensitive to equity market over- or under-reaction, which could be a weakness for MKMV's EDFs.
  - Credit ratings (either public or internal bank ratings) are often available even for firms that do not have publicly traded equity. The original MKMV model depended on a history of equity prices to estimate asset levels and volatilities, although MKMV has developed methodologies for private companies.

11.6 STATIC REDUCED FORM MODELS—CREDITRISK+

The threshold models from the previous section construct the default process from underlying financial and economic variables, and can thus be called structural models. An alternative is simply to assume a form for the default distribution rather than deriving the parameters from first principles, and then fit the parameters of the distribution to data. Such models are termed reduced form models. The reduced form approach has some advantages: the default process can be flexibly specified, both to fit observed data and for analytical tractability. CreditRisk+, developed by Credit Suisse Financial Products in the 1990s (see Credit Suisse Financial Products 1997), is an industry example of a reduced form model.

CreditRisk+ concentrates on default (not credit rating migration) and the default and loss distribution for a portfolio.


The mathematical form assumed for the individual-firm default process is reasonable and, importantly, leads to a convenient and tractable default distribution. Unlike the MKMV and CreditMetrics approaches outlined earlier, which require time-consuming simulation, the CreditRisk+ model can be solved relatively easily without simulation. This is a considerable advantage.26

The CreditRisk+ model focuses on two attributes of the default process:

1. Default rate or intensity, the analogue of the probability of default in the models discussed earlier.
2. Variability in default intensities, although it is really the common or covariability in default intensities that matters, because it is the dependence across firms (common variability generating default correlation) that is important, not idiosyncratic or firm-specific variability. The variability in default intensities can also be expressed as the mixing of underlying distributions.

The outline presented here follows McNeil, Frey, and Embrechts (2005, section 8.4.2) rather than the original Credit Suisse Financial Products (1997). Although the original presentation is comprehensive, I find it somewhat impenetrable, which is unfortunate because the techniques are so useful.

Poisson Process, Poisson Mixture, and Negative Binomial Default Distribution

Default for a single firm is approximated as a Poisson random variable with intensity λ_i. In reality, default is a Bernoulli variable, a variable that can take the value zero (no default) or one (default). The Bernoulli variable can be approximated, however, using a Poisson variable, and there are substantial benefits to such an approximation.

A Poisson random variable is a counting variable that, in contrast to a Bernoulli variable, can take values j = {0, 1, 2, . . . }. When the event of default is rare, as it usually will be, the Poisson process can provide a useful approximation to the default process. The value j counts the number of events during a period. We can identify no default with j = 0, and default with j ≥ 1. This leaves the possibility that j = {2, 3, . . . }, but when default is rare for any particular firm, the probability of multiple defaults for a single firm will be very rare.

26 It is interesting to consider that, since the default distribution in CreditRisk+ provides a good approximation to that from MKMV and CreditMetrics, the distribution and techniques used in CreditRisk+ could have wider applicability as a computationally efficient method for solving credit risk models.



A Poisson variable is governed by an intensity λ, and the probability of j events will be:

P[Poisson rv = j | intensity λ] = exp(−λ) λ^j / j!   (11.16)

For various values of the Poisson parameter λ, the probabilities of zero, one, and two events are shown in Table 11.5.

We can see that when the probability of default is rare (as for everything except the lowest-rated issuers), the probability of multiple defaults is very rare.
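Equation (11.16) is easy to check numerically. A quick sketch at intensities of the magnitude shown in Table 11.5, confirming that two defaults by the same firm carry visible probability only at the lowest ratings:

```python
from math import exp, factorial

# Poisson probabilities from equation (11.16) for small intensities.
def poisson_pmf(j, lam):
    return exp(-lam) * lam**j / factorial(j)

for lam in (0.0023, 0.053, 0.23):   # roughly BBB-, B-, and CCC-sized intensities
    probs = [poisson_pmf(j, lam) for j in (0, 1, 2)]
    print(lam, [f"{p:.5f}" for p in probs])
```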

For the Poisson model, as for the Bernoulli model, we can define a random vector Y* = (Y*_1, . . . , Y*_n)′, where Y*_i now counts the number of events or defaults (and we hope the number of multiple defaults for a single firm will be low). We define the random variable M* = Σ_i Y*_i, which is now the sum of the number of events. The sum M* will approximate the number of defaults when the intensity and the probability of multiple events are low.

The benefit of the Poisson framework versus the Bernoulli framework, and it is a substantial benefit, arises when considering a portfolio of multiple firms. For firms that are independent, the sum of the Poissons across the individual firms is itself Poisson. This means that the total number of defaults has a simple form:

Independence across firms:

P[total defaults = k] ≈ P[M* = k] = exp(−Σ_{i=1..n} λ_i) (Σ_i λ_i)^k / k!   (11.17a)

TABLE 11.5 Probability of Multiple Events for Poisson Random Variable with Various Intensity Parameters

                BBB        B          CCC
Intensity, λ    0.00230    0.05296    0.26231
Zero events     99.770%    94.970%    79.220%
One event        0.230%     4.901%    18.454%
Two events       0.000%     0.126%     2.149%

Note: The intensity parameters are chosen so that the probability of one or more events (default) matches the maximum likelihood estimates for default by S&P rating from McNeil, Frey, and Embrechts (2005, table 8.8).


Contrast this with the Bernoulli process used in MKMV or CreditMetrics, where the total number of defaults will not have a simple form (for example, binomial) unless all firms have the same default probability, a case that never occurs in practice.

In the real world, defaults across firms are not independent, and so expression (11.17a) cannot be used directly. The model can be expanded, however, to allow a firm's default intensity, λ_i, to vary randomly as a function of variables F; in other words, λ_i = λ_i(F). What is important here is not just that each individual firm's default intensity λ_i be random, but that the default intensities for firms vary together: the variables F must be common across firms.27,28 Now, if we impose the assumption that, conditional on the realization of the common variables F, the Poisson processes across firms are independent, then we can use:

Conditional independence across firms:

P[total defaults = k | F] ≈ P[M* = k | F] = exp(−Σ_{i=1..n} λ_i(F)) (Σ_i λ_i(F))^k / k! = Poi(Λ(F))   (11.17b)

27 In fact, not all firms have to depend on all the same variables, but there must be some common variables across some groups of firms.
28 Credit Suisse Financial Products (1997, appendix A2) asserts that default rates cannot be constant. They point out that the standard deviation of observed default counts (for a portfolio of firms) is higher than that predicted by a Poisson model with (independent) fixed default rates, what is called over-dispersion in the actuarial literature. They then claim that the assumption of fixed default rates is incorrect and default rates must be variable. This is not the best way to state the issue. Over-dispersion does indeed imply that default rates (in a Poisson model) cannot be constant, but variable default rates alone do not imply over-dispersion. The important issue is dependence versus independence of the default process across firms. Default rate variability that is common across firms will produce dependence and over-dispersion in the default count distribution, but variability in default rates that is idiosyncratic or firm-specific quickly averages out and produces little over-dispersion for even modest-size portfolios. Also, it is important to carefully distinguish between default intensities (unobserved parameters of the Poisson process, also termed default rates) and default counts (observed counts of defaults), and Credit Suisse Financial Products (1997) does not always do so. Default counts for a portfolio may be expressed as a percentage of the number of firms in the portfolio and termed a default rate, and such observed quantities may be used to estimate default intensity parameters, but counts and intensities are conceptually distinct.


The distribution of the total number of defaults conditional on F will be Poisson, and the unconditional distribution will be a mixture across the Poi(Λ(F)), mixing with the distribution of F. The variables F are common factors that affect some or all of the firms in a common way, in the same manner as for correlation across assets in the threshold models. The F serve to mix independent Poisson distributions, resulting in a non-Poisson distribution.

The simplest example might be F = f, a single macroeconomic factor representing the state of the economy, with default intensity for firms higher when the state of the economy is low. Conditional on the state of the economy, however, firms' default processes are independent.

Expression (11.17b) is still the conditional probability, conditional on the common factors F, whereas we need the unconditional distribution. The F will be random variables for which we must choose a distribution, and then take the expectation over the distribution of F. Assumptions for F that produce a gamma-distributed intensity λ are particularly convenient, because then the unconditional distribution will be related to the negative binomial distribution. When λ is univariate gamma, the unconditional distribution of the sum M* will be negative binomial.

To see how this works, take the case where F is univariate and the intensity for all firms is a linear function of a gamma-distributed f:29

λ_i = k_i · f, with f ~ Ga(a, b)   (11.18a)

29 This follows the description of CreditRisk+ in McNeil, Frey, and Embrechts (2005, section 8.4.2), but simplified to a univariate factor.

With this assumption,

E(f) = a/b, var(f) = a/b²   (from the definition of the gamma distribution)
E(λ_i) = k_i a/b, var(λ_i) = k_i² a/b²   (from 11.18a)
E(Σλ_i) = E(f) Σk_i = (a/b) · Σk_i
var(Σλ_i) = var(f) (Σk_i)² = (a/b²) · (Σk_i)²

We can use these expressions (and the definition of the gamma) to see that Σλ_i will be distributed Ga(a, b/Σk_i). Now, according to McNeil, Frey, and Embrechts (2005, 357, proposition 10.20), for (M* | f) distributed Poisson and Σλ_i distributed Ga(α, β), M* will be distributed negative binomial:



{M* | f} ~ Poi(Σ_{i=1..n} λ_i)  and  (Σ_{i=1..n} λ_i) ~ Ga(α, β)   (11.18b)

⇒ {M*} ~ Nb(α, β/(1 + β)) = Nb(α, p)   (11.18c)

writing p = β/(1 + β):

E(M*) = α(1 − p)/p,  V(M*) = α(1 − p)/p²,  Mode = [α(1 − p) − 1]/p.

This is written in terms of (α, β), the parameters of the distribution of the sum Σλ_i, what we could call the portfolio intensity gamma distribution. It is also useful to express everything in terms of (a, b), the parameters of the gamma distribution of the factor f. This simply requires that we substitute α = a and β = b/Σk_i = p/(1 − p), or p = (b/Σk_i)/(1 + b/Σk_i), to get:

{M*} ~ Nb(a, (b/Σk_i)/(1 + b/Σk_i))   (11.18d)

E(M*) = a Σk_i / b,  V(M*) = a (Σk_i / b)² (1 + b/Σk_i)

This approach has huge benefits: a gamma-Poisson mixture produces a simple form (negative binomial) for the distribution of portfolio defaults, M*. The default distribution is now a well-known distribution that can be calculated analytically rather than by time-consuming simulation.

Details of CreditRisk+ Assumptions

So now we turn to the specific assumptions for the CreditRisk+ model.

- Default for an individual firm is approximated by a Poisson random variable.
- Default intensity of the Poisson process for an individual firm is λ_i(F), a function of the common variables F.
- Default, conditional on F, is independent across firms.
- The distribution of F is gamma.

Specifically, the intensity for firm i is:

λ_i(F) = k_i w_i′F   (11.19)


where
  k_i = average default intensity (and approximate average default rate) for firm i
  w_i′ = p-dimensional vector of weights for firm i, (w_i1, . . . , w_ip), with the condition that the sum is one: Σ_j w_ij = 1
  F = p-dimensional independent random vector, each element distributed Ga(a_j, b_j) (using McNeil, Frey, and Embrechts's notation for the parameters of the gamma distribution, so that E(F_j) = a_j/b_j, Var(F_j) = a_j/b_j²) and choosing a_j = b_j = 1/σ_j²

These assumptions assure that E(F_j) = 1, var(F_j) = σ_j², and E(λ_i(F)) = k_i E(w_i′F) = k_i. In other words, the average intensity for firm i is k_i. This is also approximately the default probability, which is given by:

P(Y*_i > 0) = E(P(Y*_i > 0 | F)) = E(1 − exp(−k_i w_i′F)) ≈ k_i E(w_i′F) = k_i

These assumptions also ensure that the number of defaults, conditional on F, is Poisson, by equation (11.17b). The elements of F are gamma-distributed, and we saw earlier that a gamma mixture of a Poisson is related to the negative binomial.

For F univariate, the intensity for all firms is:

λ_i = k_i · f, with f ~ Ga(1/σ², 1/σ²)

and Σλ_i will be distributed Ga(1/σ², 1/(σ² Σk_i)) (mean Σk_i, variance σ²(Σk_i)²). This will give:

M* ~ Nb(1/σ², 1/(1 + σ² Σk_i)),  E(M*) = Σk_i,  var(M*) = Σk_i (1 + σ² Σk_i)   (11.20)

As stated earlier, this is a huge benefit, since M* is a well-known distribution and can be handled without simulation.
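Because M* in equation (11.20) is a standard negative binomial, it can be evaluated with an off-the-shelf library. A minimal sketch using the Figure 11.13 parameters (1,000 firms, k_i = 0.05, σ² = 0.22); scipy's nbinom(n, p) has mean n(1 − p)/p, matching the Nb(α, p) convention used here.

```python
from scipy.stats import nbinom, poisson

# Portfolio default-count distribution from equation (11.20).
n_firms, k, sigma2 = 1000, 0.05, 0.22
sum_k = n_firms * k                       # = 50
a = 1.0 / sigma2                          # negative binomial "size"
p = 1.0 / (1.0 + sigma2 * sum_k)          # = 1/12

mix = nbinom(a, p)
print("mean:", mix.mean())                # 50
print("std :", mix.std())                 # ~24.5, vs 7.1 for unmixed Poisson
print("99th percentile:", mix.ppf(0.99), "vs unmixed Poisson:",
      poisson(sum_k).ppf(0.99))
```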

When the common factors F are multidimensional (independent gammas), M* will be equal in distribution to the sum of independent negative binomial random variables. The distribution will not be as simple as in the univariate case, but there are recursive formulae for the probabilities P(M* = k) (see Credit Suisse Financial Products 1997; McNeil, Frey, and Embrechts 2005, section 8.4.2; Panjer recursion in section 10.2.3).

Default Distributions for Poisson and Negative Binomial

Returning to the univariate case, we compare the mean and variance of this negative binomial (mixture) versus the Poisson with no mixing (σ² = 0, no gamma mixing, and no common variability in default intensities, which means independence of default processes across firms).


The mean is the same: Σk_i. The variance of the Poisson is Σk_i, while for the negative binomial it is increased by the factor (1 + σ² Σk_i), and the distribution is also skewed.

Figure 11.13 shows the distribution for the Poisson (no mixing) and negative binomial (mixed Poisson, dependence across firms). The portfolio is 1,000 identical firms with parameter values representative of firms rated single-B: E(λ_i(F)) = 0.05 (corresponding to roughly 0.05 probability of default) and σ² = 0.22. The mean for both distributions is 50 defaults. The standard deviation for the Poisson distribution is 7.1 defaults, and the distribution is symmetric. The negative binomial is substantially spread out relative to the Poisson (standard deviation more than three times higher, at 24.5 defaults), and substantially skewed.

The analytic results and simple distributions considerably simplify thecalculation of the default distribution and properties of the distributionsuch as VaR or economic capital.

FIGURE 11.13 Number of Defaults for a Portfolio of 1,000 Homogeneous Loans—Alternate Dependence Assumptions
Note: This is the number of defaults from holding a portfolio of 1,000 homogeneous loans, calculated using a Poisson default model, each firm with average default intensity of 0.05. The Poisson mixture is a mixture of Poissons with identical intensities λ = 0.05 and univariate mixing variable f ~ Ga(a = 1/σ², b = 1/σ²), σ² = 0.22, producing a negative binomial distribution, Nb(1/σ², 1/(1 + 50σ²)). This corresponds to pairwise default correlations between firms of 0.012.


Intensity Volatility and Default Correlation

Credit Suisse Financial Products (1997) takes the fundamental parameters of the model to be the mean and standard deviation of the random default intensity λ_i(F), and calibrates these against observed data. (Remember that for the univariate case λ_i = k_i·f and f ~ Ga(1/σ², 1/σ²), so the mean is k_i and the standard deviation is k_i σ.) It is pretty straightforward to calibrate or estimate the mean of λ_i from observables. Rating agencies follow firms and report the number of defaults from the pool of followed firms. The number of defaults (M_t) and the number of firms being followed (m_t) are available annually by ratings category. From this, it is easy to calculate the average observed default rate (separately for each ratings category):

average default rate = p̄ = (1/T) Σ_t (M_t/m_t)   (11.21)

This observed default rate is an estimate of the mean default probability and can be equated to the mean of λ_i, since for the Poisson model the mean of λ_i is approximately the mean default probability.

The standard deviation is not so straightforward. It is important here to distinguish between the default intensity (with an assumed distribution whose standard deviation is a parameter of the model) and the observed or finite-sample default rate (which will have a finite-sample distribution with some different standard deviation). The distinction is important but somewhat subtle. Consider the case of n identical firms, each with fixed Poisson intensity λ (in other words, the standard deviation of the default intensity distribution is zero). The Poisson intensity for the collection of firms will be nλ, and the standard deviation of the default count will be √(nλ). The observed or finite-sample default rate is the count divided by n, and it will have a standard deviation of √(λ/n). In other words, even when the intensity is constant (standard deviation of the default intensity distribution is zero), the observed average default rate standard deviation will be positive because of random sampling variability.30

30 Credit Suisse Financial Products (1997) and other authors do not always distinguish between the default intensity (which is a parameter of the model, an input) and the finite-sample default rate (which is a finite-sample statistic of the model, an output), and this can lead to confusion. Crouhy, Galai, and Mark (2000) is an example. On pp. 405–406 (using their table 8.3, 326) they claim that the standard deviation of the observed default rate is higher than would be implied by a Poisson process with fixed intensity. While their conclusion may be right for other reasons, their analysis is wrong, with two fundamental flaws. First, there is an outright computational error. For single-B obligors, from their table 8.3, 326, E(default rate) = 7.62 percent = 0.0762. If this were the fixed Poisson intensity λ, then √λ = √(7.62 percent) = √0.0762 = 0.276 = 27.6 percent; they claim instead that √λ is 2.76 percent. (Their mistake is in taking √7.62 = 2.76 and applying the percentage operation outside the radical rather than inside. This elementary error is not, however, representative of the overall high quality of Crouhy, Galai, and Mark [2000].) Second, and more subtly, they compare the standard deviation of finite-sample default rates (a finite-sample statistic) to the standard deviation of the Poisson intensity λ (a parameter of the model). For a single firm with Poisson-distributed defaults and fixed intensity λ, the standard deviation of the count (number of defaults) is √λ. For n identical firms, it is √(nλ). The standard deviation of the observed default rate (count divided by sample size n) is √(λ/n). Because the observed default rate is a finite-sample statistic, its standard deviation will vary with n (here as 1/√n), as for any finite-sample statistic. The bottom line is that the standard deviation of the observed finite-sample default rate is not √λ. Their table 8.3 does not give the sample size, and we therefore cannot calculate what the standard deviation of the finite-sample default rate would be for a Poisson model; their comparison is meaningless. (As an exercise, we can calculate what the standard deviation of the finite-sample default rate would be for various sample sizes. For single-B obligors, from their table 8.3, 326, E(default rate) = 0.0762. For a sample of 20 firms and a Poisson with fixed intensity λ = 0.0762, the standard deviation of the default rate would be √(0.0762/20) = 0.062, while for a sample of 100, it would be √(0.0762/100) = 0.0276. The observed standard deviation of the finite-sample default rate is actually 0.051 [again, from their table 8.3]. This tells us that for a sample size of 20, the observed standard deviation would be too low relative to a Poisson with fixed intensity, while for a sample of size 100, it would be too high. Without knowing the sample size, however, we cannot infer whether 0.051 is too high or too low.)


The standard deviation of the random default intensity λ_i(F) can be extracted from the standard deviation of observed default rates p̄, but to do so is not trivial. As just argued, finite-sample (observed) default rates will fluctuate because of sampling variability even if the default intensity is constant. This finite-sample variability will vary with n or √n (sample size). Observed default rates will also fluctuate if the default intensity is random, and this will not vary with n in the same way. The trick is to distinguish between the two.

For ease of illustration, assume that the number of firms being followed each year is the same: m_t = n. When the intensity is constant (the same for all firms, λ_i(F) = k = constant), the distribution of the count M_t will be Poisson, and the variance of the observed average default rate p̄ will be k/n. This expression goes down as n increases.31 In contrast, for λ_i(F) distributed gamma (λ_i = k_i·f and f ~ Ga(1/σ², 1/σ²)), the variance of the observed default rate is k/n + k²σ²; the second term in the expression does not go down as n increases.32

31 The variance of the count is kn, so the variance of the default rate is kn/n² = k/n.
32 The variance of the count is nk(1 + σ²nk), so the variance of the observed rate is k/n + k²σ²; cf. equation (11.21). Ignoring the finite-sample nature of the observed default rates does make a difference. Data in table 8 of Gordy (2000), derived from Standard and Poor's published data, show that for single-B issuers the average number of issuers is about 240, the average default rate is about 0.0474, and the variance of the observed default rate is about 0.000859. This implies k/n is about 0.000195 and k²σ² about 0.000664, implying σ² ≈ 0.296. Ignoring the k/n term would give σ² ≈ 0.382.



Figure 11.14 demonstrates how the volatility of observed default rates falls as the number of firms in the sample increases. Panel A is fixed intensity, an unmixed Poisson distribution, for 100 and 1,000 firms. For fixed intensity, the width of the observed average default rate distribution shrinks substantially as the number of firms rises from 100 to 1,000; the volatility goes like √(k/n). Panel B is variable intensity, a mixed Poisson with intensity gamma-distributed (λ = k·f = 0.05·f and f ~ Ga(1/σ², 1/σ²)), producing a negative binomial. Here, the width of the default rate distribution does not shrink very much as the number of firms rises from 100 to 1,000, because the volatility behaves like √(k/n + k²σ²) and the term k²σ² dominates. The bottom line is that the standard deviation of observed default rates must be used carefully to estimate the standard deviation of the default intensity. Gordy (2000) discusses estimation of the standard deviation, and McNeil, Frey, and Embrechts (2005, section 8.6) discuss estimation more generally.

The usual approach for CreditRisk+ is to calibrate the standard deviation of the intensity using the standard deviation of observed default rates. Alternatively, one could calibrate against the observed pairwise default correlations. The default correlations are a fundamental aspect of the credit risk problem, and focusing on default correlations makes this explicit. It is the common or covariability of default intensities across firms that is important in producing asymmetric default distributions. (It is possible to show, by simulation, that idiosyncratic variability in default intensity does not have an impact on overall portfolio variability as the portfolio grows.) Since it is the covariability that matters, focusing specifically on default correlations seems appropriate, particularly when modeling individual firm relationships. Nonetheless, since default is rare and joint default doubly rare, calibrating against default correlations can be difficult.

To calculate the (approximate) pairwise default correlation, remember that Y*_i counts the number of events, and the event of default is Y*_i > 0. For the case of a univariate common variable f, the default probability is given by:

P(Y* > 0) = E(P(Y* > 0 | f)) = E(1 − exp(−kf)) ≈ k E(f) = k



FIGURE 11.14 Decrease in Width of Distribution of Observed Default Rates as Sample Size Increases, Fixed versus Variable Default Intensity (Panel A: Poisson, fixed intensity, 100 and 1,000 firms; Panel B: Poisson mixture, variable intensity, 100 and 1,000 firms)
Note: Distribution of observed default rates (equation 11.21). Panel A is for the default count Poisson-distributed with constant default intensity λ = k = 0.05; the variance of the default rate distribution is k/n. Panel B is for the default count negative binomial-distributed, a mixture of Poissons, with intensity λ = k·f = 0.05·f and f ~ Ga(1/σ², 1/σ²), σ² = 0.22. The variance of the default rate distribution is k/n + k²σ².
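The behavior behind Figure 11.14 is easy to reproduce by simulation. A minimal sketch with the same parameters (k = 0.05, σ² = 0.22), treating the number of simulated years as a free simulation parameter:

```python
import numpy as np

# Sampling variability of observed default rates: fixed vs. mixed intensity.
# With fixed intensity the sd shrinks like sqrt(k/n); with common gamma
# mixing it is floored near k*sigma (the k^2*sigma^2 term dominates).
rng = np.random.default_rng(1)
k, sigma2, years = 0.05, 0.22, 200_000

for n in (100, 1000):
    fixed = rng.poisson(n * k, size=years) / n               # constant intensity
    f = rng.gamma(shape=1/sigma2, scale=sigma2, size=years)  # E(f)=1, var(f)=sigma2
    mixed = rng.poisson(n * k * f) / n                       # common mixing factor
    print(f"n={n}: sd fixed {fixed.std():.4f} (theory {np.sqrt(k/n):.4f}), "
          f"sd mixed {mixed.std():.4f} (theory {np.sqrt(k/n + k**2*sigma2):.4f})")
```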


The joint default probability is:

P(Y*_i > 0 and Y*_j > 0) = E(P(Y*_i > 0 and Y*_j > 0 | f))
  = E[(1 − exp(−k_i f))(1 − exp(−k_j f))]
  ≈ k_i k_j E(f²) = k_i k_j [V(f) + E(f)²] = k_i k_j (σ² + 1)

The default correlation will be (approximately)

Default Correlation � ½ki � kj � ðs2 þ 1Þ � ki � kj�=½pððki � k2i Þ � ðkj � k2j ÞÞ�¼ ½ki � kj � s2�=½pððki � k2i Þ � ðkj � k2j ÞÞ�

ð11:22Þ

The mixing by the gamma variable f both increases the variance of the total default distribution (to Σk_i(1 + σ² Σk_i) from Σk_i) and induces correlation across defaults (approximately k_i k_j σ² / √[(k_i − k_i²)(k_j − k_j²)] versus zero). We can view the gamma mixing as either increasing the variance or creating correlation; they are equivalent and both are valid.

Specific Factor  As mentioned at the beginning, the outline presented here follows McNeil, Frey, and Embrechts (2005, section 8.4.2), which presents the model as a mixture of Poissons. I believe this approach simplifies the exposition. For example, Credit Suisse Financial Products (1997) introduces a specific factor (appendix A12.3). For the mixture-of-Poissons outline presented here, the default intensity for a given firm i is the sum over gamma-distributed variables, indexed by j. Repeating equation (11.19) from before:

λ_i(F) = k_i w_i′F = k_i Σ_{j=1..p} w_ij f_j   (11.19)

where
  k_i = average default intensity (and approximate average default rate) for firm i
  w_ij = weights for firm i, applied to common factor j, with the condition that the sum is one: Σ_j w_ij = 1
  f_j = independent random variables distributed Ga(a_j, b_j) (using McNeil, Frey, and Embrechts's notation for the parameters of the gamma distribution, so that E(f_j) = a_j/b_j, Var(f_j) = a_j/b_j²) and choosing a_j = b_j = 1/σ_j²

A specific factor corresponds to defining f_0 as a constant, that is, a degenerate gamma variable with σ_0 = 0. In the case of a single common factor


(no degenerate factor), the mean and standard deviation of the intensity are in a fixed ratio for all levels of default intensity:

λ_i = k_i · f, with f ~ Ga(1/σ², 1/σ²)  ⇒  E(λ_i) = k_i, var(λ_i) = k_i² σ²

By introducing the degenerate f_0, this ratio can vary across levels of intensity:

λ_i = k_i (w_i0 + w_i1 f_1), with f_1 ~ Ga(1/σ², 1/σ²), w_i0 + w_i1 = 1  ⇒  E(λ_i) = k_i, var(λ_i) = k_i² w_i1² σ²

The resulting unconditional distribution for the default count M* will now be the convolution of a Poisson (intensity Σ_i w_i0 k_i) and a negative binomial (M* ~ Nb(1/σ², 1/(1 + σ² Σ_i w_i1 k_i)), with the negative binomial component having mean Σ_i w_i1 k_i and variance Σ_i w_i1 k_i (1 + σ² Σ_i w_i1 k_i)). The convolution makes the distribution slightly more difficult computationally than the negative binomial with no constant w_i0, but still orders of magnitude less computationally intensive than simulation as for the Bernoulli case. We will see that the introduction of the constant w_i0 will be important in fitting to observed data.

Loss Distribution

The discussion so far has covered only the default distribution. Losses depend on both the event of default and the loss given default:

Loss = 0 if no default; LGD if default

The loss given default depends on the exposure and the recovery upon default:

Loss Given Default = Exposure × (1 − Recovery)

In the CreditRisk+ model, the exposure and recovery are subsumed into the loss given default, which is treated as a random variable. The loss distribution will be the compounding of the default distribution and the distribution of the loss given default. This is discussed in detail in Credit Suisse Financial Products (1997). The loss distribution will be a compound distribution (see McNeil, Frey, and Embrechts 2005, section 10.2.2). For the assumptions in the CreditRisk+ model, where the default distribution is a Poisson mixture, the loss distribution will be a compound mixed Poisson distribution, for which there are simple recursion relations, detailed in Credit Suisse Financial Products (1997).



11.7 STATIC MODELS—THRESHOLD AND MIXTURE FRAMEWORKS

Threshold and Bernoulli Mixture Models

The static (fixed time period) structural models discussed in Section 11.5 were formulated as threshold models: default (or ratings transition) occurs when a critical variable X crosses below a critical threshold d. Joint default for two firms is determined by the joint probability that both threshold variables are below their respective critical thresholds:

P[firm 1 and 2 both default] = P[X_1 < d_1 and X_2 < d_2]

In many cases, the X_i are assumed jointly normal, so that this is a statement about a bivariate (or, for more than two firms, multivariate) normal distribution.

When the threshold variables are formulated using the common factor structure of (11.15), the model can alternatively be represented as a Bernoulli mixture model. Bernoulli mixture models have a number of advantages, particularly for simulation and statistical fitting (cf. McNeil, Frey, and Embrechts 2005, section 8.4).

The definition for the common factor structure is equation (11.15), reproduced here:

X_i = b_i F + e_i   (11.15)

Conditional on F, the threshold variables X_i are independent because the e_i are independent. This means the joint default process is independent, conditional on F:

P[firm 1 and 2 both default | F] = P[X_1 < d_1 and X_2 < d_2 | F]
  = P[b_1 F + e_1 < d_1 and b_2 F + e_2 < d_2 | F]
  = P[e_1 < d_1 − b_1 F and e_2 < d_2 − b_2 F | F]
  = P[e_1 < d_1 − b_1 F | F] · P[e_2 < d_2 − b_2 F | F]
  = Φ[(d_1 − b_1 F − μ_1)/σ_1] · Φ[(d_2 − b_2 F − μ_2)/σ_2]
  = p_1(F) · p_2(F)

where the final-but-two equality follows because e_1 and e_2 are conditionally independent.


The upshot is that the probability of default is independent across firms (conditional on F), with the probability for each firm being a function p_i(F). For the preceding threshold models, the function p is the normal CDF:

p_i(F) = Φ[(d_i − b_i F − μ_i)/σ_i]   (11.23)

but other choices are equally good; see McNeil, Frey, and Embrechts (2005, 354).

The important point is that, conditional on the common factors F, each firm's default is an independent Bernoulli trial with probability given by p_i(F). As a result, working with the distribution of defaults for a portfolio becomes more straightforward. For simulation, this boils down to the following: instead of generating a high-dimensional multivariate distribution {X_1, . . . , X_n}, we generate a univariate F and then perform independent Bernoulli trials (by generating independent uniform random variates).

We can define a random vector Y = (Y_1, . . . , Y_n)′, where Y_i = 1 means firm i has defaulted and Y_i = 0 means it has not. We can define the random variable M = Σ_i Y_i, which is the sum of the Y_i, that is, the number of defaults. If all the firms are identical so that all the p_i are the same, say p̄(F), then the distribution of the sum M (conditional on F) will be binomial, and the probability of k defaults out of n firms will be:

P[M = k] = P[k defaults out of n] = (n choose k) p(F)^k (1 − p(F))^(n−k)   (11.24a)

In the general case, each p_i(F) will be different. We can define a vector y = (y_1, . . . , y_n)′ of zeros and ones to represent a particular configuration of defaults, and the probability of such a configuration (conditional on F) is:

P[Y = y | F] = Π_i p_i(F)^(y_i) (1 − p_i(F))^(1−y_i)   (11.24b)

This is a sequence of Bernoulli trials; each firm is subject to a Bernoulli trial determining whether it is in default or not. The total number of defaults, M, will now be a sum of Bernoulli rvs (still conditional on F), but each with potentially different p_i. This will not be binomial and does not have any simple distribution.33

33 This is the reason for using Poisson models: the sum of Poissons does have a simple distribution. Nonetheless, since the p_i will all be roughly the same size (all small, since the probability of default is low), the distribution will tend toward normal as n gets large (by the central limit theorem).


To complete the Bernoulli mixture framework requires a distribution for the random variables F. The MKMV and CreditMetrics models considered earlier were originally formulated as threshold models with common factors F normally distributed. Now we are treating them as Bernoulli mixture models, and the F is the mixing distribution, which is normal (possibly multivariate normal). Normal is the usual choice but not the only choice.

The unconditional distribution is found by integrating (11.24b) over the distribution of F. This is now a mixture of Bernoulli processes, with F serving as the mixing variable. The distribution for the total number of defaults, M, will not tend to a normal, and as seen in Section 11.3, it will certainly not be symmetric. (See McNeil, Frey, and Embrechts [2005, section 8.4] for a complete discussion.)

The mixing produces dependence across defaults. The conditional default probability is p_i(F), given by (11.23) for the preceding threshold models. The fact that firms' default probabilities share the common variables F produces dependence. Say that F is univariate and that all b_i are the same, b > 0. Then when F is below average, it will affect all firms in the same way, and default for all firms will be higher. This is dependence across firms because the joint probability of default is higher when F is below average, and lower when F is above average.34 The strength of the dependence depends on the variance of F relative to e and the size of b and σ.

One significant benefit of working in the Bernoulli mixture framework is in simulation. If we knew the common factors F, then simulating the process would be very simple:

1. Determine the probability of default for each firm i, p_i(F), as a function of F. In many applications, the function is p_i(F) = Φ[(d_i − b_i F − μ_i)/σ_i].
2. Perform a sequence of Bernoulli trials: for each firm, draw an iid uniform random variate and compare it to p_i(F); the firm is in default or not depending on whether the uniform rv is below or above p_i(F).

In fact, we do not know the value of F, but simulating the unconditional process is only slightly more complex. All that is required is that, for each trial, we first draw a random realization for F. The F will generally be multivariate normal, but with dimension far lower than the number of firms. (F might be on the order of 10 dimensions, while there can easily be thousands of firms. In contrast, working in the threshold framework means simulating a multivariate normal with dimension equal to the number of firms, making simulation computationally more difficult.)

34 This is a single common factor that affects all firms in exactly the same manner. An example might be a recession that makes business conditions worse and increases the probability of default for all firms. In general, there can be more than one factor and the b_i can differ across firms, so that some firms could be positively affected, others negatively affected, and some not affected at all.



The simulation scheme is a slight extension of that shown earlier:

1. Draw a realization for F.
2. Determine the probability of default for each firm i, p_i(F), as a function of F. In many applications, the function is p_i(F) = Φ[(d_i − b_i F − μ_i)/σ_i].
3. For each firm, draw an iid uniform random variate and compare it to p_i(F); the firm is in default or not depending on whether the uniform rv is below or above p_i(F).

Most implementations of threshold models (and in particular MKMV and CreditMetrics) can be formulated as Bernoulli mixture models because the correlation across firms is modeled using a common factor structure, as in equation (11.15). Writing the model's stochastic structure as a Bernoulli mixture model simplifies thinking about how and why the default distribution behaves as it does.

Another important implication of the Bernoulli mixture approach is that under this framework average default rates will vary over time as F varies. Consider a homogeneous pool of firms or loans and a conditional default probability given by equation (11.23), with X following the equicorrelation structure as given in (11.4):

X_i = √ρ F + √(1 − ρ) e_i   (F and e_i ~ N(0,1))

Conditional on a realization of F, defaults will be binomial with mean default rate p(F) = Φ[(d − F√ρ)/√(1 − ρ)]. The median will be Φ[d/√(1 − ρ)], and the ±1σ values will be Φ[(d ∓ √ρ)/√(1 − ρ)]. Say we are considering default over a one-year period. Then in any given year, the default distribution will be binomial, but from one year to the next, the default rate will vary, and when considering the distribution over multiple years, the distribution will be skewed.
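A quick check of how the conditional default rate moves with the common factor, using illustrative parameters (p̄ = 0.05, ρ = 0.05, as in the single-B example later in this section):

```python
from math import sqrt
from scipy.stats import norm

# Conditional default rate p(F) = Phi[(d - sqrt(rho)*F)/sqrt(1 - rho)]
# evaluated at F = +1, 0 (the median), and -1.
rho, d = 0.05, norm.ppf(0.05)
for F in (+1.0, 0.0, -1.0):
    p = norm.cdf((d - sqrt(rho) * F) / sqrt(1 - rho))
    print(f"F = {F:+.0f}: one-year default rate = {p:.4f}")
```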

One final note concerning Bernoulli mixture models. The threshold models considered earlier assume the probability of default p_i(F) depends on F through the normal CDF Φ, as in equation (11.23). Alternative assumptions could be used, and are discussed further on.

Poisson Mixture Models

The Bernoulli mixture framework is very useful, but as discussed in Section 11.6 with reference to the CreditRisk+ model, it can be convenient to model the event of default by a Poisson rather than a Bernoulli random variable.


This is an approximation, but a very useful and convenient one. The convenience arises because the sum of independent Poisson random variables remains Poisson, while the sum of Bernoulli variables does not have any simple distribution.35 The total number of defaults over a period is the sum of the individual firm default variables, so when default is modeled by independent Poisson variables, the total number of defaults is immediately available as a Poisson distribution.

35 Unless all firms have the same default probability, in which case the distribution is binomial; but this will never be the case in practical applications.

Unconditional independence across firms is not a realistic assumption, but as with Bernoulli mixture models, it is often reasonable to assume that default processes are independent when conditioning on some set of random variables F. When default is modeled by conditionally independent Poisson variables, the sum or total number of defaults, conditional on F, will be Poisson. The unconditional default distribution is the integral over the distribution of F; in other words, a Poisson distribution mixed with F. When F is gamma-distributed, the resulting distribution will be a gamma-Poisson mixture, which is negative binomial.

The CreditRisk+ model of Section 11.6 was presented as a gamma-Poisson mixture model. Firm default intensity is assumed to be (repeating equation (11.19)):

λ_i(F) = k_i w_i′F   (11.19)

with Σ_j w_ij = 1 and F an independent multivariate gamma. Conditional on F, firms' default processes are independent. The analytic and semianalytic results for the gamma-Poisson mixture considerably simplify the calculation of the default distribution and properties of the distribution such as VaR or economic capital.

One fruitful way to view the Poisson mixture is as an approximation to a Bernoulli mixture, an approximation that is computationally tractable. The distributions for Bernoulli and Poisson mixture models are quite similar, given appropriate choices of parameters. Consider the one-factor Bernoulli mixture model from Section 11.3 (although it was discussed there as a threshold model, it can also be treated as a Bernoulli mixture model).

Reasonable parameters for a Bernoulli mixture model of identical single-B-rated issuers would be an average probability of default p̄ = 0.05



and threshold variable correlation (assuming the equicorrelation structure of equation (11.4)) ρ = 0.05. This will produce a default correlation of 0.012. Matching parameters for a gamma-Poisson mixture model would be λ = 0.05 and σ² = 0.22. For a portfolio of 1,000 firms, this gives M* ~ Nb(1/0.22, 1/12) ≈ Nb(4.55, 0.0833), with mean 50, standard deviation 24.5, and pairwise default correlation 0.012.

The simulated Bernoulli mixture and the analytic negative binomial distributions are very close. Both distributions have mean 50 and default correlation 0.012; the standard deviations are 24.5 (Poisson mixture) and 24.7 (Bernoulli mixture), and the 1%/99% VaR is approximately $41,000 (Poisson mixture) and $43,000 (Bernoulli mixture), compared with approximately $9,300 for the unmixed distribution with no correlation. Figure 11.15 shows both the unmixed distributions (binomial and Poisson) and the mixed distributions. They are shown separately (the binomial/Bernoulli mixture in Panel A and the Poisson mixture/negative binomial in Panel B) because they would be virtually indistinguishable to the eye if drawn in the same chart. Furthermore, the Bernoulli and Poisson mixtures are close for small portfolios as well as large.
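The analytic side of this comparison is a one-liner with scipy; the mean and standard deviation match the figures quoted above (the simulated Bernoulli mixture side is the sketch given earlier in this section).

```python
from scipy.stats import nbinom

# Gamma-Poisson mixture matched to the Bernoulli mixture:
# Nb(1/sigma^2, 1/(1 + sigma^2 * sum k_i)), sigma^2 = 0.22, sum k_i = 50.
sigma2, sum_k = 0.22, 50.0
mix = nbinom(1 / sigma2, 1 / (1 + sigma2 * sum_k))
print(mix.mean(), round(mix.std(), 1))   # 50.0 24.5
```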

Generalized Linear Mixed Models

Both the Bernoulli and the Poisson mixture models discussed so far fit under the generalized linear mixed models structure (see McNeil, Frey, and Embrechts 2005; McCullagh and Nelder 1989). The three elements of such a model are:

1. A vector of random effects, which are the F in our case.
2. A distribution from the exponential family for the conditional distribution of responses. In our case, responses are defaults (either Y_i for the Bernoulli or Y*_i for the Poisson). The defaults are assumed independent conditional on the random effects F. The Bernoulli, Poisson, and binomial distributions are from the exponential family.
3. A link function h(·) linking the mean response conditional on the random effects, E(Y_i | F), to a linear predictor of the random effects, μ + x_i′β + F. That is, a function h(·) such that E(Y_i | F) = h(μ + x_i′β + F). Here, the x_i represent observed variables for the ith firm (such as indicators for industry or country, or balance sheet or other firm-specific financial measures), and μ and β are parameters.

Table 11.6 shows various Bernoulli and Poisson mixture models. The probit-normal and the gamma-Poisson are used in commercial products, as noted.


FIGURE 11.15 Comparison of Poisson and Bernoulli Mixture Distributions—Portfolio of 1,000 Firms (Panel A: Bernoulli, independent binomial versus correlated mixture; Panel B: Poisson, unmixed versus Poisson mixture)
Note: This shows the default distribution for 1,000 identical loans. For the Bernoulli distributions (Panel A), the probability of default is p = 0.05, while for the Poisson distributions (Panel B), the intensity is λ = 0.05. The independent Bernoulli and Poisson distributions are unmixed. The Bernoulli mixture is mixed with a normal (probit-normal) using an equicorrelation structure with ρ = 0.05 (see Section 11.3). The Poisson mixture is mixed with a Ga(1/0.22, 1/0.22), which produces a Nb(1/0.22, 1/(1 + 0.22·0.05·1,000)). All distributions have mean 50, the mixtures both have pairwise default correlation 0.012, and the standard deviation of the Poisson mixture is 24.5 and of the Bernoulli mixture 24.7.


TABLE 11.6 Various Mixture Models

Probit-normal (Bernoulli mixture)
  Random effects (mixing variable): F ~ Normal(0, σ²)
  Conditional firm-default distribution: Bernoulli
  Unconditional default-count distribution: nonanalytic; simulation required
  Link function: normal CDF, p_i(F) = Φ(μ + F)
  Commercial product: threshold models (MKMV or CreditMetrics); for MKMV, the link function is not Φ(·) but a proprietary function h(·)

Logit-normal (Bernoulli mixture)
  Random effects (mixing variable): F ~ Normal(0, σ²)
  Conditional firm-default distribution: Bernoulli
  Unconditional default-count distribution: nonanalytic; simulation required
  Link function: logistic CDF, p_i(F) = G(μ + F), G(x) = (1 + exp(−x))⁻¹
  Commercial product: CreditPortfolioView (from McKinsey, but apparently not currently an active product)

Beta (Bernoulli mixture)
  Random effects (mixing variable): F ~ Beta(a, b)
  Conditional firm-default distribution: binomial
  Unconditional default-count distribution: beta-binomial
  Link function: linear, p_i(F) = F
  Moments (portfolio size n, univariate F): mean = n·a/(a + b), var = n·a·b·(n + a + b)/[(a + b)²·(1 + a + b)]

Gamma-Poisson (Poisson mixture)
  Random effects (mixing variable): F ~ Gamma(1/σ², 1/σ²)
  Conditional firm-default distribution: Poisson
  Unconditional default-count distribution: negative binomial and related
  Link function: linear, λ_i(F) = k_i w_i′F = k_i(w_i0 + Σ_j w_ij f_j)
  Commercial product: CreditRisk+
  Moments (portfolio size n, univariate F): mean = Σk_i, var = Σk_i·(1 + σ²Σk_i)

Log-normal Poisson (Poisson mixture)
  Random effects (mixing variable): F ~ Normal(0, σ²)
  Conditional firm-default distribution: Poisson
  Unconditional default-count distribution: nonanalytic; simulation required
  Link function: exponential, λ_i(F) = exp(μ + F)


Parameters for Bernoulli and Poisson Mixture Models

McNeil, Frey, and Embrechts (2005, section 8.6.4) fit a probit-normal Bernoulli mixture model to historical Standard and Poor's default count data from 1981 to 2000.36 They assume that any firm is in one of five rating classes (A, BBB, BB, B, CCC) and that all firms within a ratings class have the same probability of default, p_r. The probability of default varies with a single common factor, f. The equivalence between Bernoulli mixture models and threshold models will be useful, so we write out the notation for both.

36 Default data are reconstructed from published default rates in Brand and Bahr (2001, table 13, pp. 18–21).

Probit-normal mixture:

p_r(f) = Φ(m_r + s·f),  f ~ N(0,1)

Threshold: the critical variable is X = √ρ·f + √(1−ρ)·ε as in (11.4), with f ~ N(0,1) and ε ~ N(0,1), so X ~ N(0,1); default occurs in rating r when X < d_r, giving

p_r(f) = Φ[(d_r − √ρ·f)/√(1−ρ)]

In the mixture representation,

E[p_r(f)] = p̄_r = ∫ Φ(m_r + s·z)·φ(z) dz  for each ratings class (A, BBB, BB, B, CCC)

P[firm of type r and firm of type s both default] = E[p_r(f)·p_s(f)] = ∫ Φ(m_r + s·z)·Φ(m_s + s·z)·φ(z) dz

But the equivalence between the Bernoulli mixture and threshold formulations gives:

s = √(ρ/(1−ρ)),  ρ = s²/(1+s²),  m_r = d_r/√(1−ρ)

E[p_r(f)] = p̄_r = Φ(d_r) = Φ(m_r·√(1−ρ))

Since default occurs when X < d_r and X ~ N(0,1), the average probability of default is P[X < d_r] = Φ(d_r). (It also is true that ∫ Φ(m_r + s·z)·φ(z) dz = Φ[m_r/√(1+s²)] = Φ[m_r·√(1−ρ)].)

E[p_r(f)·p_s(f)] = P[joint normal rv (with correlation ρ) < d_r and < d_s]

The threshold formulation will generally be more useful for computation while the Bernoulli mixture formulation is more useful for estimation

36 Default data are reconstructed from published default rates in Brand and Bahr (2001, table 13, pp. 18–21).


and simulation. The pairwise default correlation is from equation (11.2):

Default correlation = ( E[p_r(f)·p_s(f)] − p̄_r·p̄_s ) / √[ (p̄_r − p̄_r²)·(p̄_s − p̄_s²) ]    (11.25)
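For a numerical check, the threshold form makes this correlation easy to compute: E[p_r(f)·p_s(f)] is just a bivariate normal probability. A minimal sketch in Python (mine, not the text's), assuming scipy is available and using the scaling s = 0.243 reported in the note to Table 11.7 below:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

s = 0.243                                  # scaling of the common factor (Table 11.7 note)
rho = s**2 / (1 + s**2)                    # critical-variable correlation, about 0.0558
d = {"A": -3.3290, "BBB": -2.8370, "BB": -2.3360, "B": -1.6420, "CCC": -0.8140}

def default_correlation(dr, ds):
    """Pairwise default correlation, equation (11.25), computed in the
    threshold form: E[p_r(f) p_s(f)] is a bivariate normal probability."""
    pr, ps = norm.cdf(dr), norm.cdf(ds)
    joint = multivariate_normal.cdf([dr, ds], mean=[0, 0],
                                    cov=[[1, rho], [rho, 1]])
    return (joint - pr * ps) / np.sqrt((pr - pr**2) * (ps - ps**2))

print(default_correlation(d["BBB"], d["BBB"]))   # about 0.00148, cf. Table 11.7
```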

Table 11.7 summarizes the results, from McNeil, Frey, and Embrechts (2005, Table 8.8). These results provide a valuable resource for calibrating parameters of simple default models. Importantly, McNeil, Frey, and Embrechts also fit a simple extension to the model that allows the variance of the systematic factor, σ² (that is, the scaling applied to the common factor f), to differ by rating category: p_r(f) = Φ(m_r + s_r·f). This additional

TABLE 11.7 Parameter Estimates for Bernoulli Mixture Model—from McNeil, Frey, and Embrechts (2005, Table 8.8)

                                 A         BBB       BB        B         CCC
Avg Prob Default, E[p_r(f)]      0.00044   0.00227   0.00975   0.05027   0.20776
Critical value, d_r             −3.3290   −2.8370   −2.3360   −1.6420   −0.8140
Mixture mean, m_r               −3.4260   −2.9200   −2.4040   −1.6900   −0.8380

Implied Default Correlation
A        0.00040   0.00076   0.00130   0.00220   0.00304
BBB      0.00076   0.00148   0.00255   0.00435   0.00609
BB       0.00130   0.00255   0.00443   0.00762   0.01080
B        0.00220   0.00435   0.00762   0.01329   0.01912
CCC      0.00304   0.00609   0.01080   0.01912   0.02796

Note: This is based on the maximum likelihood parameter estimates for a one-factor Bernoulli mixture model from McNeil, Frey, and Embrechts (2005, table 8.8). The probability of default for ratings class r (r = A, BBB, BB, B, or CCC) is given by p_r(f) = Φ(m_r + s·f) = Φ[(d_r − √ρ·f)/√(1−ρ)] with f ~ N(0,1). The average probability of default is p̄_r = Φ(d_r) = Φ(m_r·√(1−ρ)). The underlying data are annual default counts from Standard and Poor's for 1981 to 2000. The data in this table are slightly adjusted from that shown in MFE table 8.8: I have estimated a significant digit beyond that published in their table for the average probability of default E[p_r(f)], the mixture mean m_r, and the scaling parameter s (0.243 versus 0.24) to more closely match the implied default correlations from McNeil, Frey, and Embrechts, table 8.8. The default correlations are calculated using Equation (11.23) and the critical values d_r shown. Parts reproduced from Table 5.6 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

Source: Based on McNeil, Frey, and Embrechts (2005, table 8.8).


heterogeneity, however, does not provide substantial improvement to the fit, indicating that the simple model is adequate.

As mentioned earlier, under the Bernoulli mixture framework (common factor structure) the defaults for any uniform pool of firms or loans will vary from one period to the next as the common factor f varies. The default rate for firm type r conditional on f is Φ[(d_r − √ρ·f)/√(1−ρ)]. The median will be Φ[d_r/√(1−ρ)] while the ±2σ default rates will be Φ[(d_r ± 2·√ρ)/√(1−ρ)]. Table 11.8 shows the mean, median, and the ±2σ default rates implied by the estimates in Table 11.7.

The data in Table 11.8 show striking variability in default rates. Averaging across years, the default probability for single-A rated firms is 0.044 percent, but roughly once every six or seven years the probability will be more than 0.073 percent. A diversified portfolio will do nothing to protect against this risk, since all firms are responding to the same common factor. This highlights why credit risk is such a difficult issue: credit risks either all do well (low default rates) or all do badly (high default rates).
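The rows of Table 11.8 follow directly from the conditional default rate formula above. A minimal sketch for the single-A column (again mine, assuming scipy and the parameters from Table 11.7):

```python
import numpy as np
from scipy.stats import norm

s = 0.243
rho = s**2 / (1 + s**2)
d_A = -3.3290                               # single-A critical value, Table 11.7

def conditional_default_rate(d_r, f):
    """Default rate conditional on the common factor f (threshold form)."""
    return norm.cdf((d_r - np.sqrt(rho) * f) / np.sqrt(1 - rho))

# f = -2 is the bad "+2 sigma" outcome (low factor, high defaults)
for f in (-2, -1, 0, +1, +2):
    print(f, conditional_default_rate(d_A, f))
# roughly 0.00164, 0.00073, 0.00031, 0.00012, 0.00005 -- the A column of Table 11.8
```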

The cross-correlations shown in Table 11.7 are calculated from equation (11.25) and are critically dependent on the structure of the model. The event of default is rare and simultaneous default is doubly rare.37 It is therefore difficult to estimate cross-correlations directly from the data (nonparametrically), particularly for higher-rated issuers. The structure of the probit-normal, in particular the functional form p_r(f) = Φ(m_r + s·f), and the assumption of a single common factor with homogeneous scaling (the same s applied to all ratings

TABLE 11.8 Variation in Default Rates under Bernoulli Mixture Framework Implied by Estimates from Table 11.7

                              A         BBB       BB        B         CCC
Avg prob def                  0.00044   0.00227   0.00975   0.05027   0.20776
Prob def +2 sig               0.00164   0.00747   0.02756   0.11429   0.36246
Prob def +1 sig               0.00073   0.00371   0.01535   0.07395   0.27596
Prob def 0 sig (median)       0.00031   0.00175   0.00811   0.04551   0.20104
Prob def −1 sig               0.00012   0.00078   0.00406   0.02662   0.13987
Prob def −2 sig               0.00005   0.00033   0.00193   0.01478   0.09277

Note: In any period (conditional on a realization of F) the default process is Bernoulli, producing binomially distributed default counts. Default rates across time vary, producing a mixture of Bernoullis and a skewed distribution.

37 For single-A issuers, there should be less than one default per year for a sample of 1,000 issuers, and from table 8 of Gordy (2000), it appears the annual sample is on the order of 500.


categories) imposes the cross-correlation structure exhibited in Table 11.7. The size of the scaling factor s determines the level of the correlations. The primary feature in the data that will determine s will be the variance of the count distribution relative to that for an unmixed Bernoulli distribution. In other words, the variability of the annual counts or default rates, rather than default correlations across firms, will be the primary determinant of s. In this sense, CreditRisk+'s focus on default rate variability is justified.

The model's dependence on functional form to determine the correlation structure is both a strength and a weakness of the model. The strength is that it provides a structure that produces cross-correlations in the face of scarce data. The weakness is that the structure of the model is difficult to test given data limitations. Given the paucity of data, however, there is probably little alternative. One must trust that the threshold model, with default determined by a critical variable crossing a threshold, is appropriate, and that modeling correlation across the underlying critical variables appropriately captures the cross-correlation of defaults.

The data in Table 11.7 can also be used to calibrate a Poisson mixture (CreditRisk+ type model). For a single-factor model (with no constant term, so that w_r0 = 0 and w_r1 = 1, and writing q instead of σ² for the gamma variance), the approximate default correlation, repeating equation (11.22), is:

Default correlation ≈ [ p̄_r·p̄_s·q ] / √[ (p̄_r − p̄_r²)·(p̄_s − p̄_s²) ]    (11.22)

There is, in fact, no single q that even comes close to reproducing the correlations in Table 11.7. The implied values are shown in Table 11.9, and these vary by a factor of more than eight.

The Poisson mixture model with a single common factor

λ_i = k_i·f  with  f ~ Ga(1/σ², 1/σ²)

TABLE 11.9 Gamma Variance Parameters for Poisson Mixture Model Implied by Table 11.7

        A       BBB     BB      B       CCC
A       0.910   0.767   0.629   0.458   0.284
BBB     0.767   0.651   0.539   0.396   0.249
BB      0.629   0.539   0.450   0.334   0.212
B       0.458   0.396   0.334   0.251   0.162
CCC     0.284   0.249   0.212   0.162   0.107

Note: These are the gamma variance parameters q for a Poisson mixture model (as detailed in Section 11.6, or equation (11.24) with w_r0 = 0, w_r1 = 1) that would be implied by equating the approximate expression for default correlation in equation (11.22) to the default correlations given in Table 11.7.


can be extended by introducing a constant term w_r0:

λ_r = k_r·(w_r0 + w_r1·f_1)  with  f_1 ~ Ga(1/q, 1/q),  w_r0 + w_r1 = 1

⇒ E(λ_r) = k_r,  var(λ_r) = k_r²·w_r1²·q    (11.26)

For a portfolio with n identical firms, this means λ will be the sum of a constant (n·k_r·w_r0) and a gamma-distributed random variable (mean n·k_r·w_r1, variance n²·k_r²·w_r1²·q, implying that it is ~Ga(1/q, 1/(n·k_r·w_r1·q))). This will produce a random variable that is, in distribution, equal to the sum (convolution) of a Poisson (with parameter n·k_r·w_r0) and a negative binomial (Nb(1/q, p) with p = (1/(n·k_r·w_r1·q))/(1 + 1/(n·k_r·w_r1·q)) = 1/(1 + n·k_r·w_r1·q)). (See McNeil, Frey, and Embrechts 2005, 357.)

When we do this, normalizing by w_A1 = 1 (w_A0 = 0 and thus using q = 0.9101), we get the weights shown in the first row of Table 11.10, and the correlations shown in the bottom of the table.38 These correlations match those shown in Table 11.7 quite well.

The benefit, and it is a substantial benefit, of formulating the model as a Poisson mixture in Table 11.10 rather than a Bernoulli mixture as in

38 Gordy (2000) normalizes by setting q = 1, but also investigates q = 1.5 and q = 4.0. I do not have any intuition for what is the appropriate choice and simply pick w_A1 = 1 for convenience.

TABLE 11.10 Weights and Default Correlations for Single-Factor Poisson Mixture Model

                                      A         BBB       BB        B         CCC
Default intensity (approx
  default prob)                       0.00044   0.00227   0.00975   0.05027   0.20776
Weights w_r1                          1.00000   0.84600   0.70290   0.52530   0.34230
Weights w_r0                          0.00000   0.15400   0.29710   0.47470   0.65770

Implied Default Correlation
A        0.00040   0.00077   0.00132   0.00230   0.00333
BBB      0.00077   0.00148   0.00256   0.00444   0.00644
BB       0.00132   0.00256   0.00443   0.00767   0.01112
B        0.00230   0.00444   0.00767   0.01329   0.01928
CCC      0.00333   0.00644   0.01112   0.01928   0.02796

Note: The second and third rows show the weights for the default intensity for a Poisson mixture or CreditRisk+-type model (Equation 11.26). The value for the gamma mixing variance, q, is 0.9101.


Table 11.7, is that the default distribution is simple to compute. The distribution will be negative binomial for the single-A, where w_0 = 0, and a convolution of Poisson and negative binomial for the others. The convolution is computationally simple relative to the simulation required for calculating the Bernoulli mixture.39 Note, however, that while the convolution of the Poisson and negative binomial matches the correlation of the mixed Bernoulli probit-normal, it does not always match the shape of the distribution. The negative binomial and mixed probit-normal are essentially the same when w_r0 = 0, w_r1 = 1, but differ when w_r0 > 0.
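Footnote 39 describes the convolution loop. Concretely, here is a minimal sketch (mine, not the text's) for the 10,000-firm BBB portfolio, with parameters taken from the note to Figure 11.16 below; it assumes scipy's standard negative binomial parameterization:

```python
import numpy as np
from scipy.stats import poisson, nbinom

# 10,000 BBB firms; k, w0, w1, q from the Figure 11.16 note
n, k, w0, w1, q = 10_000, 0.00227, 0.1540, 0.8460, 0.9101
lam = n * k * w0                    # Poisson intensity, about 3.4958
alpha = 1 / q                       # negative binomial shape, about 1.09878
p = 1 / (1 + n * k * w1 * q)        # negative binomial probability, about 0.054119

# convolution: P[N = m] = sum_j P[Poisson = j] * P[NB = m - j]
m = np.arange(151)
dist = np.convolve(poisson.pmf(m, lam), nbinom.pmf(m, alpha, p))[: m.size]

print(dist @ np.arange(dist.size))  # mean defaults, about 22.7 = 10,000 x 0.00227
```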

Figure 11.16 shows both the Bernoulli probit-normal mixture and the Poisson mixture (convolution) for a portfolio of 10,000 BBB firms and 200 CCC firms. Panel A shows that the Poisson mixture distribution for BBB, where w_r1 = 0.8460, is not too far from the mixed Bernoulli. Panel B shows, however, that as w_r1 falls (so the Poisson mixture becomes more weighted toward a Poisson versus negative binomial), the shape of the Poisson mixture diverges from the probit-normal mixture.40 A pure negative binomial (with w_r0 = 0 and q = 0.1066) does match the Bernoulli mixture; the pure negative binomial is not shown in Figure 11.16 Panel B because it is virtually indistinguishable from the Bernoulli mixture.

There does not appear to be a single set of parameters for the Poisson mixture that simultaneously matches the correlation structure and also reproduces the shape of the Bernoulli mixtures. There is, however, nothing sacred about the shape of the Bernoulli probit-normal mixture. The tails of the distributions cannot be fit well because of the paucity of data, so it would be difficult to discriminate between the two on the basis of observations.

Further Comparisons across Credit Models

The MKMV and CreditMetrics models can be reduced to the same Bernoulli mixture framework, and we have just seen that the Poisson mixture used in CreditRisk+ can often be a close approximation to the Bernoulli

39 Both the Poisson and the negative binomial distributions are analytic, and the convolution involves a simple looping over possible numbers of defaults. For example, to calculate the probability of two defaults for the Poisson/negative binomial convolution, we sum the following terms: P[Poiss = 0]·P[NB = 2] + P[Poiss = 1]·P[NB = 1] + P[Poiss = 2]·P[NB = 0]. The number of terms in the sum becomes larger as the number of possible defaults becomes larger, but the number of calculations is orders of magnitude less than for simulating a Bernoulli mixture.
40 The standard deviations of the distributions are close: BBB is 19.2 for the Bernoulli mixture and 18.9 for the Poisson; CCC is 14.7 for the Bernoulli and 15.0 for the Poisson mixture.


FIGURE 11.16 Comparison of Shape of Bernoulli Probit-Normal Mixture versus Poisson Mixture (Convolution of Poisson and Negative Binomial). Panel A: 10,000 BBB firms; Panel B: 200 CCC firms (Bernoulli mixture, line; Poisson/negative binomial, dash).
Note: Panel A shows the distribution for a portfolio of 10,000 BBB firms. The Bernoulli is a probit-normal mixture with m = −3.426 and s = 0.2430 (probability of default 0.00227, critical level −2.837, critical variable correlation ρ = 0.05576). The Poisson/negative binomial is the mixed Poisson with common factors given by equation (11.24) with w_r0 = 0.1540, w_r1 = 0.8460, and q = 0.9101 (convolution of Poisson with intensity 3.4958 and negative binomial with alpha = 1.09878, p = 0.054119). Panel B shows the distribution for a portfolio of 200 CCC firms. The Bernoulli is a probit-normal mixture with m = −0.838 and s = 0.2430 (probability of default 0.20776, critical level −0.814, and critical variable correlation ρ = 0.05576). The Poisson/negative binomial is the mixed Poisson with common factors given by equation (11.24) with w_r0 = 0.6577, w_r1 = 0.3423, and q = 0.9101 (convolution of Poisson with intensity 27.3302 and negative binomial with alpha = 1.09878, p = 0.071701).


mixture distribution. It should therefore come as little surprise that, when parameters are calibrated to be roughly the same, the models produce roughly the same results. Crouhy, Galai, and Mark (2000, ch. 11) go through the exercise of calibrating these three models (plus a fourth, CreditPortfolioView, which can also be formulated as a Bernoulli mixture model but with a logit-normal link function; see Table 11.6). They apply the models to a large diversified benchmark bond portfolio and find that "the models produce similar estimates of value at risk" (p. 427).

Gordy (2000) compares CreditRisk+ and CreditMetrics (more accurately, a version of CreditMetrics that models default only, just as we have implicitly done). He shows the similarity of the mathematical structure underlying the two models. He also compares the results for a variety of synthetic (but plausible) bank loan portfolios, and shows that the models are broadly similar.

11.8 ACTUARIAL VERSUS EQUIVALENT MARTINGALE (RISK-NEUTRAL) PRICING

The focus for credit risk so far has been on building the distribution of defaults and losses. There has been little or no attention to pricing credit risks or using market prices to infer the distribution of credit losses, because we have assumed that market prices are not readily available. The focus has been on building the distribution of defaults and losses from first principles, often using complicated models and limited data. We have, naturally, used the actual probability of defaults and losses, the probability we actually observe and experience in the world: what we would call the physical probability measure.

We are going to turn in the next section to market pricing of credit securities, and what are termed dynamic reduced form models. In doing so, we need to introduce a new concept, the equivalent martingale or risk-neutral probability measure.

The distinction between physical and equivalent martingale probability measures can be somewhat subtle, but in essence it is straightforward. The physical measure is the probability that we actually observe, what we experience in the physical world. All the credit risk distributions we have been discussing so far have been using the physical measure (which we will call P), the probability we actually experience. The equivalent martingale or risk-neutral measure (which we will call Q) arises in pricing market-traded securities. It is an artificial probability measure, but one that is nonetheless incredibly useful for pricing securities.


The natural question is: Why use anything other than the physical, real-world probabilities? The answer is that pricing securities using the physical probability measure is often difficult, while pricing with the equivalent martingale measure reduces to the (relatively) simple exercise of taking an expectation and discounting; for market-traded instruments, the risk-neutral approach is incredibly powerful.

Physical Measure and the Actuarial Approach to Credit Risk Pricing

To see how and why pricing under the physical measure can be difficult, we will go back and consider the simplest, stylized credit model outlined in Section 11.3: a portfolio of 1,000 loans that mature in one year and pay 6.5 percent if not in default. The distribution of income is binomial and shown in Figure 11.3. The mean income is $59,350, which means the average income per loan, accounting for losses due to default, is 5.935 percent. In Section 11.3, we briefly outlined how a firm might set reserves for such a portfolio. But we can consider the problem from a different perspective: Given the default behavior, what should be the price? More specifically, instead of taking the 6.5 percent promised interest as given, what interest rate should a firm charge? Is 6.5 percent high or low considering the risk that, on average, 10 loans out of 1,000 will default?

This seemingly straightforward question actually raises some deep and difficult problems. Assume for now that these loans are not traded and so there is no market price available, so we must work without the benefit of reference to outside prices. One standard approach is to set the interest rate at a spread relative to a default-free bond of the same maturity, with the spread set as:

Total spread = Administrative cost + Expected loss + Risk premium

This is referred to as an actuarial approach because the expression has the same structure as standard actuarial premium principles (see McNeil, Frey, and Embrechts 2005, section 9.3.4). The expected loss and risk premium are the focus (administrative costs are not the prime interest here). The expected loss is generally straightforward. In our example, it is simple, just the product of the probability of default (0.01) times the expected loss given default (50 percent), giving 0.5 percent.

The risk premium is more difficult, as it depends fundamentally on risk preferences. A common approach is to apply a hurdle rate (return on equity) to the economic capital held against the loan. Economic capital is determined from the distribution of income, Figure 11.3, as discussed in


Section 11.3. It will be the buffer to protect the firm against unexpected losses from the overall portfolio, a buffer intended to protect against default and ensure some prespecified (low) probability of default. As such, economic capital will be a tail measure such as VaR or expected shortfall.

There is no correct choice of hurdle rate; it depends on risk preferences and attitudes toward risk. Whose preferences? Maybe the firm's management, maybe investors, but the answer is not trivial or obvious. For our example, let us choose a 20 percent return on equity. On our economic capital of $7,300 (from Section 11.3), this gives an aggregate risk premium of $1,460. As a percent of the portfolio investment ($1M), this is 0.146 percent.

The economic capital, and thus the risk premium, is determined for the overall portfolio, not on a security-by-security basis, and so must be allocated to individual securities.41 The risk premium allocation is itself nontrivial. In realistic portfolios, some loans may be highly correlated with the overall portfolio and thus contribute substantially to the overall risk, requiring substantial capital and entailing a large risk premium. Others may be uncorrelated with the portfolio, contribute little to the overall risk, and thus require little capital and entail a low risk premium. The allocation may be done using the analogue of the contribution to risk discussed in Chapter 10. McNeil, Frey, and Embrechts (2005, section 11.3) discuss various capital allocation principles. For our example, where all loans are identical, the spread would be 0.646 percent.

Note the difficult and somewhat tricky steps to arrive at the loan spread:

- Calculate the expected loss for each loan (0.5 percent).
- Calculate the economic capital for the overall portfolio ($7,300).
- Calculate a firm-wide risk premium by applying a hurdle rate to the economic capital (20 percent, $1,460, or 0.146 percent).
- Allocate the overall risk premium back to each loan (0.646 percent).

Moving down, the steps become more complex with more subjective components.

These loans have now been priced in a reasonable manner, but the process is not trivial and partly subjective. For loans such as these, for which there are not comparable or reference market prices, such an approach may be the best that can be done.
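The arithmetic of these steps is compact enough to express directly. A minimal sketch in Python, using only the numbers assumed in the example above:

```python
# Stylized portfolio: 1,000 one-year loans, $1,000,000 total investment
p_default = 0.01            # probability of default
lgd = 0.50                  # loss given default (50 percent)
expected_loss = p_default * lgd                 # 0.500 percent

economic_capital = 7_300    # from Section 11.3
hurdle_rate = 0.20          # assumed 20 percent return on equity
portfolio = 1_000_000
risk_premium = economic_capital * hurdle_rate / portfolio   # 0.146 percent

spread = expected_loss + risk_premium           # 0.646 percent, before admin cost
print(f"total spread over the risk-free rate: {spread:.3%}")
```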

41 In our example, all loans are identical, so all loans contribute equally to the economic capital, but for realistic applications, this will not be the case.


Equivalent Martingale or Risk-Neutral Pricing

For traded assets or securities, when market prices are available, the equivalent martingale or risk-neutral pricing approach is very powerful. The easiest way to understand the difference is by an example.42 The Merton model of 11.5 posits a firm funded by bonds and stock. The firm's total assets are assumed to follow a log-normal process, (11.7a), which gives us the probability of default (11.7b). The asset process specified in (11.7a) is the physical process and the probability of default in (11.7b) is the physical or actual probability of default. This is exactly what we wanted in Section 11.5, and what we used in Sections 11.5 through 11.7.

At no point, however, did we attempt to actually price the bond or the equity. We could have done so by taking expectations over the future payouts (the payouts given by equations (11.5) or (11.6)), using the true distribution (11.7a). The problem is that we would need to know investors' preferences, their attitude toward risk, to ascertain the relative value of the upside versus downside. This is not a trivial exercise, the equivalent of (but more difficult than) choosing the hurdle rate and the allocation of aggregate risk premium in the preceding example.

Under certain conditions, however, future cash flows can be valued by simply taking the discounted expectation of those cash flows, but taking the expectation over the artificial equivalent martingale probability measure Q rather than the true measure P.43 For the Merton model, it turns out that the martingale measure Q simply requires replacing the mean or average growth rate for the asset process by the risk-free rate. Instead of μ in equation (11.7a), we substitute r:

Asset process under physical measure P, log-normal with mean μ:

ln(Ṽ_T) ~ N[ ln(V_0) + (μ − σ²/2)·T, σ²·T ]

Asset process under equivalent martingale measure Q, log-normal with mean r:

ln(Ṽ_T) ~ N[ ln(V_0) + (r − σ²/2)·T, σ²·T ]

42 McNeil, Frey, and Embrechts (2005, section 9.3) have a nice alternative example.
43 The most important condition is that markets are complete in the sense that future payouts (say, the payouts for the stock and bond in equations (11.5) and (11.6)) can be replicated by trading in current assets. See McNeil, Frey, and Embrechts (2005, section 9.3); Duffie (2001); and Bingham and Kiesel (1998).


The true default probability is given by (11.7b), reproduced here:

p = P_P[Ṽ_T ≤ B] = P[ln(Ṽ_T) ≤ ln(B)] = Φ( [ln(B/V_0) − (μ − σ²/2)·T] / (σ·√T) )

while the default probability under the equivalent martingale measure is given by:

q = P_Q[Ṽ_T ≤ B] = P[ln(Ṽ_T) ≤ ln(B)] = Φ( [ln(B/V_0) − (r − σ²/2)·T] / (σ·√T) )

The difference between μ and r will mean that the two probabilities are different. For the Merton model, it is possible to express q in terms of p:

q = Φ( Φ⁻¹(p) + ((μ − r)/σ)·√T )

Generally, q will be larger than p since usually μ > r. (This expression is only valid for the Merton model, although it is often applied in practice to convert between physical and risk-neutral probabilities.)
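This conversion is one line of code. A minimal sketch (the numerical inputs in the usage line are illustrative assumptions, not values from the text):

```python
import numpy as np
from scipy.stats import norm

def risk_neutral_pd(p, mu, r, sigma, T):
    """Merton-model mapping from physical PD p to risk-neutral PD q."""
    return norm.cdf(norm.ppf(p) + (mu - r) / sigma * np.sqrt(T))

# illustrative values only: p = 1%, drift 8%, risk-free 3.5%, asset vol 25%, T = 1
print(risk_neutral_pd(0.01, mu=0.08, r=0.035, sigma=0.25, T=1.0))  # q > p since mu > r
```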

The beauty of the equivalent martingale measure is that now the price of the bond and stock can be calculated as simply the discounted expected value of the future payout. For the stock, this is:

S_0 = e^(−rT)·E_Q[S̃_T] = e^(−rT)·E_Q[max(Ṽ_T − B, 0)]

The asset value Ṽ_T is log-normal and so the expectation is, in fact, just the Black-Scholes formula for a European call:

S_0 = C_BS(t, V_0, r, σ, B, T) = V_0·Φ(d_1) − B·e^(−rT)·Φ(d_2)    (11.27a)

d_1 = [ln(V_0/B) + (r + σ²/2)·T] / (σ·√T),    d_2 = d_1 − σ·√T

The bond will be the discounted value of the promised payment, B·e^(−rT), less a put:

B_0 = B·e^(−rT) − P_BS(t, V_0, r, σ, B, T) = B·e^(−rT) − (B·e^(−rT)·Φ(−d_2) − V_0·Φ(−d_1))
    = B·e^(−rT)·Φ(d_2) + V_0·Φ(−d_1)    (11.27b)
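These formulas are straightforward to implement. A minimal sketch, with illustrative (assumed) values for the firm's assets, debt, and volatility:

```python
import numpy as np
from scipy.stats import norm

def merton_equity_debt(V0, B, r, sigma, T):
    """Equity (call) and risky-debt values under the Merton model, equations (11.27)."""
    d1 = (np.log(V0 / B) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    S0 = V0 * norm.cdf(d1) - B * np.exp(-r * T) * norm.cdf(d2)      # (11.27a)
    B0 = B * np.exp(-r * T) * norm.cdf(d2) + V0 * norm.cdf(-d1)     # (11.27b)
    return S0, B0

S0, B0 = merton_equity_debt(V0=140.0, B=100.0, r=0.035, sigma=0.25, T=5.0)
print(S0, B0, S0 + B0)                        # equity plus debt add up to assets, V0
spread = -np.log(B0 / 100.0) / 5.0 - 0.035    # yield spread over the risk-free rate
print(spread)
```

The last line backs out the bond's yield spread over the risk-free rate, the quantity discussed next.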


The beauty of the equivalent martingale or risk-neutral approach is the simplicity of the formulae (11.27). Using the risk-neutral measure, we can price the securities as if investors were risk-neutral (and the mean were r rather than μ). That is not to say that the true distribution has mean r, or that investors are actually risk-neutral. Rather, when markets are complete so there are enough securities that we can replicate the payouts (11.5) and (11.6) through dynamic trading of existing securities, we get the right answer by simultaneously using the risk-neutral measure (mean r) and treating investors as risk-neutral. The risk-neutral argument is a relative pricing argument; it works because we can dynamically hedge or replicate the payouts.

The risk-neutral approach opens a whole arena of possibilities. We now have the price of the risky bond, equation (11.27b), as a function of the relevant underlying variables. For example, the term B·e^(−rT) is the value of a risk-free or default-free bond, and we can use (11.27b) to obtain the yield spread between the risk-free and risky bond. (Note, however, that the Merton model is not ideal as a model of credit spreads, as it implies the short-dated spread tends toward zero. See McNeil, Frey, and Embrechts [2005, section 8.2.2] and Crouhy, Galai, and Mark [2000, section 9.2]. We will encounter more useful models for risky bonds and credit spreads shortly.) The term σ is the volatility of the firm's underlying assets, and we can use (11.27b) to examine exactly how the risky bond price varies with asset volatility.

Pricing the risky bond has now become easy. The probability of default is no longer the true probability, but if our primary concern is the price of the risky security, we really don't care.

Actuarial and Risk-Neutral Pricing Compared

McNeil, Frey, and Embrechts (2005, section 9.3.4) have an excellent summary contrasting actuarial pricing (using the physical probability measure) with risk-neutral pricing:

Financial and actuarial pricing compared. We conclude this section with a brief comparison of the two pricing methodologies. The financial-pricing approach is a relative pricing theory, which explains prices of credit products in terms of observable prices of other securities. If properly applied, it leads to arbitrage-free prices of credit-risk securities, which are consistent with prices quoted in the market. These features make the financial-pricing approach the method of choice in an environment where credit risk is actively traded and, in particular, for valuing credit instruments when the


market for related products is relatively liquid. On the other hand, since financial-pricing models have to be calibrated to prices of traded credit instruments, they are difficult to apply when we lack sufficient market information. Moreover, in such cases, prices quoted using an ad hoc choice of some risk-neutral measure are more or less "plucked out of thin air."

The actuarial pricing approach is an absolute pricing approach, based on the paradigm of risk bearing: a credit product such as a loan is taken on the balance sheet if the spread earned on the loan is deemed by the lender to be a sufficient compensation for the risk contribution of the loan to the total risk of the lending portfolio. Moreover, the approach relies mainly on historical default information. Therefore, the actuarial approach is well suited to situations where the market for related credit instruments is relatively illiquid, such that little or no price information is available; loans to medium or small businesses are a prime case in point. On the other hand, the approach does not necessarily lead to prices that are consistent (in the sense of absence of arbitrage) across products or that are compatible with quoted market prices for credit instruments, so it is less suitable for a trading environment.

The authors also point out that as markets develop, more credit products are priced using market prices and the risk-neutral methodology. This raises issues of consistency and uniformity across an institution, with the possibility that the same product may be priced differently by different units of a firm. Managing these issues requires a good understanding of the differences between market-based (risk-neutral) valuation and actuarial valuation.

The financial versus actuarial pricing distinction highlights an important dividing line for credit risk, maybe the most important for credit risk measurement. When a credit risk is traded, it makes sense to measure risk using those market prices and the distribution of prices. One should only use complex, default-based models when instruments are not traded, for example, for loans, some corporate bonds, counterparty exposure on derivatives, and so on.

11.9 DYNAMIC REDUCED FORM MODELS

We now turn to pricing credit-risky securities. The analysis of credit risk in this chapter has focused on credit risk management: measuring and using the P&L distribution for a portfolio or business activity over some (usually


long) period. In this section, we change gears to focus on market pricing of credit-risky securities. We will see that these models apply to credit risk when such risk can be traded. As such, the topic moves away from the tools and techniques we have discussed in this chapter and more toward the arena of market risk that we discussed in earlier chapters.

The goal of this section is to introduce the idea, not to provide a comprehensive overview. The pricing of credit-risky securities is a large and growing area. Duffie and Singleton (2003) wrote a textbook devoted to the topic. McNeil, Frey, and Embrechts (2005) devote chapter 9 of their book to the topic. This section will do no more than provide the briefest introduction.

There have been two important changes in the markets for credit-risky securities over recent years. First, an increasing variety and volume of credit risks are being actively traded. Thirty years ago few credit-risky securities beyond corporate bonds were traded, and many bonds were only thinly traded. Loans, receivables, leases: all were held to maturity by institutions and virtually never traded. Now there is a wealth of derivative securities (credit default swaps prime among them), collateralized structures, and loans that are traded. There have been huge transformations in the markets.

The second change has been in the pricing of credit risks. The development of the risk-neutral or equivalent martingale paradigm for pricing credit-risky securities has allowed investors to value credit risks, separate from other components such as interest rates. The breaking out of a security's component parts has made the pricing of credit more transparent, and has been a major factor facilitating the increase in trading of credit risks.

The growth of markets in credit risk has seen disruptions, most spectacularly during the 2007–2009 financial crisis that was related to the securitized mortgage markets. Such credit-related disruptions should not be blamed entirely on innovations and changes in the credit markets, however. Financial markets have managed to go through crises for ages, many credit-related and well before modern derivative securities. Barings Brothers went bust (the first time, in 1890) from overexposure to Argentine bonds (particularly the Buenos Ayres [sic] Drainage and Waterworks Company; see Kindleberger (1989, 132) and Wechsberg (1967, ch. 3)). Roughly 1,400 U.S. savings and loans and 1,300 banks went out of business from 1988 to 1991 because of poor lending practices and particularly overexposure to real estate. (See Laeven and Valencia 2008 and Reinhart and Rogoff 2009, appendix a.4.)

Credit Default Swaps and Risky Bonds

I will explain the idea of dynamic reduced form models by developing a simple version of a model for pricing a single-name credit default swap


(CDS). Although quite simple, this model gives the flavor of how such models work.

Outline for CDS. A CDS is the most basic credit derivative, one that forms the basis for various securities and is in many ways the easiest credit-risky security to model. (A more detailed discussion can be found in Coleman [2009]. See also McNeil, Frey, and Embrechts [2005], section 9.3.3.) Although CDS are often portrayed as complex, mysterious, even malevolent, they are really no more complex or mysterious than a corporate bond.

We discussed CDS in Chapter 3 where we showed how a standard CDS is equivalent to a floating-rate corporate bond (a floating rate note, or FRN) bought or sold on margin. We will cover some of the same material before we turn to the mathematics of pricing.

First, to see why a CDS is equivalent to a floating rate bond (FRN), consider Figure 11.17, which shows the CDS cash flows over time for a firm that sells protection. Selling protection involves receiving periodic payments in return for the promise to pay out upon default. The firm receives premiums until the maturity of the CDS or default, whichever occurs first. Since the premiums are paid only if there is no default, they are risky. If there is a default, the firm pays 100 − recovery (pays the principal on the bond less any amount recovered from the bond).

FIGURE 11.17 Timeline of CDS Payments (Sell Protection): risky premiums = C if no default; repayment of loss upon default = 100 − recovery. Reproduced from Figure 3.2 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.

Now we can use an elegant trick. With any swap agreement, only net cash flows are exchanged. This means we can insert any arbitrary cash flows we wish, so long as the same amount is paid and received and the net is zero. Let us add and subtract LIBOR payments at each premium date, and also 100 at CDS maturity, but only when there is no default. These LIBOR payments are thus risky. But since they net to zero, they have absolutely no impact on the price or risk of the CDS. In Figure 11.18, Panel A shows the original CDS plus these net-zero cash flows. Panel B then rearranges these cash flows in a convenient manner.

The left of Panel B is exactly a floating rate bond (FRN). If no default occurs, then the firm selling protection receives coupons of (LIBOR + spread) and final principal at maturity. If default occurs, the firm receives the coupon up to default and then recovery. The combination in the right of Panel B looks awkward but is actually very simple: it is always worth 100 today. It is a LIBOR floating bond with maturity equal to the date of default or maturity of the CDS: payments are LIBOR + 100 whether there is a default or not, with the date of the 100 payment being determined by the date of default (or CDS maturity). The timing of the payments may be uncertain, but that does not affect the price because any bond that pays LIBOR + 100, when discounted at LIBOR (as is done for CDS), is worth 100 irrespective of maturity.

In other words, we have just proven, rather simply and without any complex mathematics, that a CDS (sell protection) is just a combination of long an FRN and short a LIBOR floater (worth $100):

CDS (sell protection) ⇔ +FRN − LIBOR floater = +FRN − 100

By reversing the signs, we also have

CDS (buy protection) ⇔ −FRN + LIBOR floater = −FRN + 100

This is extraordinarily useful because it tells us virtually everything we want to know about the broad how and why of a CDS.44

Pricing Model for CDS. We can now turn to pricing the CDS. A model for valuing a CDS is relatively straightforward. The cash flows for a CDS (sell protection) are:

- Receive: fixed coupon c as long as there is no default.
- Pay: $100 less any recovery when (and if) default occurs.

44 The equivalence is not exact when we consider FRNs that actually trade in the market, because of technical issues regarding payment of accrued interest upon default. See Coleman (2009).


Both sets of cash flows are risky in the sense that how long and whether they are paid depend on whether default occurs, and when, exactly, that default occurs.

These cash flows are as shown in Figure 11.17. If default were known to occur at a fixed time τ, then valuation would be quite simple: discount the fixed cash flows (receive c until τ, then pay 100 − recovery) using the equivalent martingale measure. The problem is that the time τ is random and not known. So we assume a distribution for the random default time, τ, and discount back, again using the equivalent martingale measure.

This is a reduced form model in the sense that the process governing default (the random time τ) is assumed rather than default being modeled as a result of underlying financial or economic processes. It is dynamic in

FIGURE 11.18 CDS Payments plus Offsetting Payments = FRN − LIBOR floater. Panel A, CDS (sell protection) plus net-zero cash flows: risky premiums = C if no default; risky LIBOR payments = L if no default; risky principal = 100 if no default; repayment of loss upon default = 100 − recovery. Panel B, FRN plus floater of indeterminate maturity: risky FRN payments = C + L if no default; recovery upon default; risky principal = 100 if no default; risky LIBOR payments = L if no default; 100 upon default. Reproduced from Figure 3.3 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.


the sense that the default time is modeled as a stochastic process in continuous time. The benefit of the reduced form approach is the substantial flexibility in the stochastic process governing default, and the simplicity of the relative pricing (risk-free or equivalent martingale) framework.

For this example, we assume that the random default time τ is a constant-hazard process. This will make the mathematics particularly simple. The constant hazard assumption means the probability of default in the next instant of time, conditional on not yet having defaulted, is constant and does not change over time. In other words, under the risk-neutral measure, the default time τ is exponentially distributed with constant hazard a:

P(τ < t + dt | τ > t) = a·dt    P(τ > t | τ > 0) = exp(−a·t)

If we assume that the risk-free rate is constant at r, then the present value of receiving the coupons c up to the random time τ is:

PV(receive coupons c at times t_k) = Σ_k exp(−r·t_k)·c·P_Q(τ > t_k) = Σ_k exp(−r·t_k)·c·exp(−a·t_k)

This assumes that coupons occur annually. If not, then we would have c·df, where df = day fraction = (days between payments)/360 or /365, depending on the currency and the appropriate money market convention.

The PV of paying the loss upon default is the expectation of the loss (net of recovery) over the random default time. Say the loss is 100 and the recovery rate is fixed at δ. Then the loss net of recovery is 100·(1 − δ) and the expected value is:

PV(loss) = 100·(1 − δ)·∫₀ᵀ a·exp(−(r + a)·s) ds = 100·(1 − δ)·(a/(a + r))·[1 − exp(−(r + a)·T)]

The total value of the CDS is

PV of CDS (sell protection: receive premium c, pay bond loss upon default)
  = PV(receive coupons) − PV(loss)
  = Σ_k c·df·exp(−t_k·(r + a)) − 100·(1 − δ)·(a/(a + r))·[1 − exp(−(r + a)·T)]    (11.28)


where df = day fraction (for example, ≈ 92.5/360 for quarterly USD, A/360).

This is a very simple formula, one that can be evaluated in a spreadsheet without difficulty. It assumes that when default occurs between coupon payment dates, no partial coupon is paid.45
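Equation (11.28) translates directly into code. A minimal sketch, simplified to annual coupons with df = 1 (so it deliberately ignores the day-count subtleties of traded CDS):

```python
import numpy as np

def cds_pv_sell(c, r, a, delta, T):
    """PV of selling protection per 100 notional, equation (11.28).
    Annual coupons c, constant risk-free rate r, hazard a, recovery delta."""
    t = np.arange(1, int(T) + 1)                      # annual coupon dates, df = 1
    coupons = np.sum(c * np.exp(-t * (r + a)))        # risky coupon leg
    loss = 100 * (1 - delta) * (a / (a + r)) * (1 - np.exp(-(r + a) * T))
    return coupons - loss
```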

Pricing Model for Risky Bond. The real power of this approach, however, is that it puts a pure credit derivative such as this CDS in the same framework as a more traditional corporate bond. Figure 11.19 shows the cash flows for a traditional bond that is subject to default: coupons at periodic times, payment of recovery upon default, and payment of principal if no default. These are not exactly the same cash flows as shown in Figure 11.17 (although close), but whether exactly the same or not, we can value them using the same framework.

We again assume that the risk-free rate is constant at r, so that the present value of receiving the coupons c is, again:

PV(receive coupons c at times t_k) = Σ_k exp(−r·t_k)·c·P_Q(τ > t_k) = Σ_k exp(−r·t_k)·c·exp(−a·t_k)

45 In fact, CDS traded in the market often involve partial payment of coupons; see Coleman (2009).

FIGURE 11.19 Timeline of Payments for Risky Bond: risky premiums = C if no default; payment of principal = 100 if no default; recovery upon default.


The PV of recovery upon default is the expectation, over the random default time, of the recovery amount, 100·δ:

PV(recovery) = 100·δ·∫₀ᵀ a·exp(−(r + a)·s) ds = 100·δ·(a/(a + r))·[1 − exp(−(r + a)·T)]

The PV of the principal is 100 times the probability that default occurs after T, discounted at r:

PV(principal) = 100·exp(−r·T)·P[default after T] = 100·exp(−(r + a)·T)

The total value of the bond is

PV of bond = PV(receive coupons) + PV(principal) + PV(recovery)
  = Σ_k c·df·exp(−t_k·(r + a)) + 100·exp(−(r + a)·T) + 100·δ·(a/(a + r))·[1 − exp(−(r + a)·T)]    (11.29)

where df = day fraction (for example, ≈ 92.5/360 for quarterly USD, A/360).

This is a very simple formula, one that can be evaluated in a spreadsheet without difficulty.
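Equation (11.29) is implemented the same way; again a sketch under the same annual-coupon simplification:

```python
import numpy as np

def risky_bond_pv(c, r, a, delta, T):
    """PV of the risky bond per 100 notional, equation (11.29)."""
    t = np.arange(1, int(T) + 1)                      # annual coupon dates, df = 1
    coupons = np.sum(c * np.exp(-t * (r + a)))        # coupons paid while alive
    principal = 100 * np.exp(-(r + a) * T)            # principal if no default
    recovery = 100 * delta * (a / (a + r)) * (1 - np.exp(-(r + a) * T))
    return coupons + principal + recovery
```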

Equation (11.28) gives the CDS and (11.29) the bond as functions of the underlying parameters. The underlying parameters are:

r = risk-free rate
a = default intensity
δ = recovery rate

Example—Applying Market Pricing to CDS and Bond. Both the CDS and the bond depend on the same parameters, the same underlying process. This means that if we can value one instrument, we can automatically value the other. (The coupon and maturity date are characteristics of the particular instrument.) The risk-free rate r depends on wider market conditions, but the default intensity a and the recovery rate δ are specific to the particular firm, the particular issuer that we are looking at.


Corporate bonds are traded in the market and so we can get a market price for the PV. Consider a five-year bond with annual coupon 5 percent when the risk-free rate is 3.50 percent. If the bond is trading at par ($100), then we can use equation (11.29) to calculate values of the parameters a and δ that would be consistent with this market price.46 If we calculate a, assuming δ = 40 percent, then we arrive at a = 2.360 percent.

Now let us turn to a CDS, say, a five-year CDS on the same issuer with annual coupon 1 percent. Equation (11.28) gives us the value of the CDS (receiving fixed coupon, paying out upon default), which in this case turns out to be −$1.8727.
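Reusing the cds_pv_sell and risky_bond_pv sketches above, the whole exercise is a one-dimensional root search. Because those sketches simplify the payment conventions, the results land near, but not exactly on, the 2.360 percent hazard and −$1.8727 CDS value quoted here:

```python
from scipy.optimize import brentq

# calibrate the hazard a so the 5-year, 5 percent annual bond prices at par,
# holding the recovery rate fixed at delta = 40 percent (footnote 46)
a_hat = brentq(lambda a: risky_bond_pv(5.0, 0.035, a, 0.40, 5.0) - 100.0,
               1e-6, 0.50)
print(a_hat)                                      # hazard consistent with par pricing
print(cds_pv_sell(1.0, 0.035, a_hat, 0.40, 5.0))  # market-implied value of the CDS
```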

The beauty and power of what we have done is to take a theoretical framework (the dynamic reduced form model that discounts uncertain cash flows under an equivalent martingale measure) and apply it to two different but related instruments (the CDS and the risky bond on the same issuer). By treating both instruments using the same pricing framework, we can take the market prices from the bond and apply this market pricing to the CDS. Using equation (11.29), we have separated out and separately priced the pure discounting (due to the risk-free rate r) and the risky discounting (due to the default and recovery parameters a and δ). We can then apply these to a related but different set of cash flows, the cash flows for the CDS.

What we have done is to convert a nontraded credit security, the CDS, into a market-priced security. Essentially, we have used a relative pricing paradigm to move the CDS into the market pricing and market risk category. In this sense, dynamic reduced form credit models should be thought of in the same category as pricing models for other traded instruments, such as models for pricing swaps or options. They take market risk factors and translate to the P&L for the particular securities held by the firm. They apply to Step 1 ("Asset to Risk Factor Mapping") of the process for generating the P&L distribution discussed in Section 8.3 of Chapter 8. Although the instruments are credit sensitive, they do not require the techniques discussed in this chapter.

11.10 CONCLUSION

As I said early on, this chapter does not take a standard approach to discussing credit risk. I have focused heavily on the mathematics and the modeling required to build the P&L distribution, much less on the traditional techniques of credit measurement and management. I think this approach is

46 In fact, it is not possible to separate a and δ. The standard practice is to fix δ, say at 30 percent or 40 percent, and then calculate a conditional on the value of δ.


justified on two grounds. First, the modeling required to build the P&L distribution for nontraded credit risks is simple in concept but difficult in practice. I have tried to lay out the conceptual framework and highlight the simplicity of the concepts while also stressing the difficulties and subtleties of building and implementing a practical credit risk system. Second, there are many texts that do a good job of discussing the more traditional approaches to credit risk. Readers can remedy any omissions without undue difficulty.

I do want to highlight, however, the wide range of credit risk topics not covered.

In general, credit risk management is composed of three components:

1. Measurement.
2. Setting reserves, provisions, and economic capital.
3. Other management areas: setting limits, portfolio management, managing people and incentives.

The primary focus of this chapter has been on determining the distribution for defaults, which is only the first component of measuring credit risk. Measurement means determining the profit and loss (P&L) distribution. The loss itself depends on default, exposure, and recovery:

Loss = Default × Exposure × (1 − Recovery)

Defaults have taken center stage because default modeling is the most complex component of credit risk models, and models differ primarily in their modeling of defaults and the process underlying defaults, not their modeling of exposures and recovery.

Measurement: Traditional Credit Analysis and Ratings

Traditional credit analysis is devoted to analyzing individual firms, loans, and other credit risks with the goal of assessing the likelihood of default and how costly it would be were it to occur. It usually takes the form of assigning a credit rating to a credit risk. The credit rating may reflect only the likelihood of default or a combination of the probability of default and the severity of loss. In this sense, traditional credit ratings map to the default probabilities of the more formal models discussed in this chapter, or a combination of probability and loss given default. In some cases, the mapping is explicit, as in CreditMetrics, where a firm's ratings category determines the transition (and default) probability, and in the default probability estimates


by rating category from McNeil, Frey, and Embrechts (2005), discussed earlier in Section 11.7.

Most rating systems are based on both quantitative and qualitative considerations, but usually not formal models of the type discussed in this chapter. Traditional credit analysis generally focuses on individual names and not portfolio interactions directly, and thus could be termed single-name credit analysis.

In practice, there are a huge variety of methods and an extensive literature devoted to single-name credit analysis and ratings systems. There are a number of ratings agencies that rate publicly traded issues, with Standard and Poor's, Moody's, and Fitch being the most well known. Private sector issuers pay ratings agencies to rate a bond issue, and the ratings agencies then make the ratings available to the public. The ratings are relied on by many investors and regulators. Almost all public issues in the United States are rated by one or more of the ratings agencies, and many international issues and issuers (including sovereign issuers such as the United States or the Greek government) are also rated.

Many issues and issuers that a bank is exposed to will not have public ratings, and so financial institutions often develop their own internal ratings to supplement the publicly available ratings. Crouhy, Galai, and Mark (2000) devote a full chapter (chapter 7) to both public and internal credit rating systems, while Crouhy, Galai, and Mark (2006) split the topic into two chapters, one covering retail credit analysis and the other commercial credit analysis.

Measurement: Exposure and Recovery—Types of Credit Structures

Exposure and recovery are critical to measuring credit losses but have not been covered extensively in this chapter. Exposure refers to the amount that can potentially be lost if default were to occur, and recovery to the amount (or proportion) of the potential loss that is recovered. They combine to give the loss given default (LGD):

Loss given default = Exposure ($ amount) × [1 − Recovery (percent recovery)]

The current exposure is often itself difficult to measure. For example, simply collecting data on current exposures can be challenging (as mentioned in Section 11.1). The problem becomes even more difficult, however, because what matters is the exposure at the time of default, not the current exposure. Since default is in the future and itself uncertain, exposure at default can be doubly difficult to measure.


There is wide variation in the types of exposure. Marrison (2002, ch. 17) discusses various credit structures:

- Credit exposures to large corporations
  - Commercial loans
  - Commercial credit lines
  - Letters of credit and guarantees
  - Leases
  - Credit derivatives
- Credit exposures to retail customers
  - Personal loans
  - Credit cards
  - Car loans
  - Leases and hire-purchase agreements
  - Mortgages
  - Home-equity lines of credit
- Credit exposures in trading operations
  - Bonds
  - Asset-backed securities (embodying underlying exposures to corporations or retail customers from things such as loans, leases, credit cards, mortgages, and so on)
  - Securities lending and repos
  - Margin accounts
  - Credit exposures for derivatives (noncredit derivatives such as interest rate swaps)
  - Credit derivatives
  - Trading settlement

For many instruments, exposure will vary over time and with changes in markets. Consider an amortizing corporate bond with five-year final maturity. Because of amortization, the notional value of the bond will go down over time in a predictable manner. For any notional, however, the value of the bond (and thus the exposure or amount at risk of loss) will vary with the level of market risk-free interest rates: lower interest rates mean lower discounting and higher present value. A common way to represent this is by measuring the expected exposure and the maximum likely exposure (MLE). For the bond, whose value depends on interest rates, the expected exposure could be taken as the value implied by the forward curve (or possibly the notional). The MLE could be taken as the exposure at the 95th percentile of the interest rate distribution. The situation for an amortizing bond might be as shown in Figure 11.20, Panel A.


For an interest rate swap, and other derivatives such as options, the credit exposure will be more complicated. The present value for a new at-market swap will be zero and so there is no credit exposure—if the counterparty defaulted and walked away, there would be no loss in market value. Over time and as interest rates change, however, the market value of the swap may become positive or negative. If negative, then again, there is no credit exposure—if the counterparty walked away, there would be no loss in market value. When the market value is positive, however, the credit exposure will equal the market value—if the counterparty disappeared, the loss would be equal to the market value of the swap.

The exposure for an interest rate swap will start out at zero but may then become positive, or remain at zero. The exposure will be random over time, moving between zero and some positive value. It is still possible, however, to calculate the expected and the maximum likely exposures. The expected exposure could simply be taken as the value of the swap traced out along the forward curve. This might be either positive (shown in the left of Panel B of Figure 11.20) or negative (the right of Panel B of Figure 11.20—note that the exposure will actually have discrete jumps on coupon dates, but these are not shown in the figures). The maximum likely exposure could be taken as the 95th percentile of the forward curve distribution. This would be positive for virtually any swap, as shown in Panel B of Figure 11.20.

[Figure 11.20: Expected and Maximum Likely Exposure for Amortizing Bond and Two Interest Rate Swaps. Panel A: Amortizing Bond—the bond face value (and expected exposure) amortizes down over time, with the maximum likely exposure above it. Panel B: Two Interest Rate Swaps—expected exposure and maximum likely exposure over time. Axes: Exposure ($) versus Time. Reproduced from Figure 5.21 of A Practical Guide to Risk Management, © 2011 by the Research Foundation of CFA Institute.]
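To make the expected and maximum likely exposure idea concrete, here is a stylized Monte Carlo sketch (not the book's calculation): the swap's mark-to-market value is modeled as a driftless random walk starting at zero, exposure is its positive part, and the MLE is read off the 95th percentile. All parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stylized swap MTM: starts at zero, random walk, hypothetical $25k/day vol
n_paths, n_days, daily_vol = 20_000, 250, 25_000.0
mtm = np.cumsum(rng.normal(0.0, daily_vol, (n_paths, n_days)), axis=1)

# Credit exposure is only the positive part of market value
exposure = np.maximum(mtm, 0.0)

expected_exposure = exposure.mean(axis=0)           # profile over time
mle_exposure = np.percentile(exposure, 95, axis=0)  # 95th percentile

print(f"day 250 expected exposure: ${expected_exposure[-1]:,.0f}")
print(f"day 250 maximum likely exposure: ${mle_exposure[-1]:,.0f}")
```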

Marrison (2002, ch. 17) discusses the concept of maximum likely exposure more extensively, and has useful diagrams for many credit structures.

The expected or the maximum likely exposure could be used with the stylized default model discussed in Section 11.3 to produce a distribution of losses. Indeed, commercial products often do something akin to this. (CreditMetrics uses something close to the expected credit exposure. MKMV has the option to use market prices [forward prices] to calculate exposures, and this gives roughly the expected exposure.)

Using the expected and maximum likely exposure, however, is only an inexact approximation. In reality, the exposure at default will generally be random. Considering an interest rate swap again, the actual exposure may be zero or positive, and will change as default-free interest rates change randomly over time. Combining random default processes with random variation in underlying market variables is difficult and not commonly done.47

This is a major issue to be addressed in future credit risk model development. The problem is particularly important for instruments such as interest rate swaps in which the exposure changes substantially with market variables (interest rates for swaps). The issue will be less important for instruments such as short-dated loans, in which the exposure is primarily due to principal at risk.

Reserves, Provisions, and Economic Capital

Once the distribution of defaults and losses (the P&L distribution) has been measured, it can be used. The first place it can be used is in the determination of reserves, provisions, and economic capital. This was discussed briefly in Section 11.3. In fact, the topic deserves a deeper discussion, but it also should be integrated with overall firm risk, not limited to credit risk alone.

47 Crouhy, Galai, and Mark (2000) emphasize this more than once—see pp. 343, 411.


Other Credit Risk Management Topics

Beyond the specific issues of reserves and economic capital, there are the wider issues of risk management—how to use the information on risk to manage the business. Issues such as setting limits, capital allocation, managing people, setting compensation, and other incentives are not specific to credit risk. It would be a mistake to discuss such issues in the context of credit risk alone.

Credit Mitigation

There is a large area of credit enhancement, mitigation, and hedging techniques. These range from traditional techniques such as bond insurance and mark-to-market to recent innovations such as credit default swaps. Crouhy, Galai, and Mark (2000) devote chapter 12 of their book to the topic; Crouhy, Galai, and Mark (2005) also cover it in chapter 12 of that book.

In the end, credit risk is a huge task with many components. Ernest Patakis is indeed correct to say that one of the most dangerous activities of banking is lending. This chapter has introduced many of the topics, but this treatment cannot be taken as definitive.

APPENDIX 11.1: PROBABILITY DISTRIBUTIONS

Binomial

The binomial distribution counts the number of successes in a sequence of independent yes/no or succeed/fail (Bernoulli) trials. With p = probability of success, q = 1 − p = probability of failure, the probability of k successes out of n trials is:

$$P[k \text{ successes in } n \text{ trials}] = \binom{n}{k} p^k (1-p)^{n-k}, \qquad \text{where } \binom{n}{k} = \frac{n!}{k!\,(n-k)!} \text{ is the binomial coefficient}$$

Mean number of successes = np
Variance = np(1 − p)
Mode = int(p(n + 1))

For a per-trial probability of 0.01 for the counted event (say, a 1 percent default probability, counting defaults) and n = 100: P[k = 0] = 0.366, P[k = 1] = 0.370, P[k = 2] = 0.185, P[k ≥ 3] = 0.079.
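These numbers are easy to verify; a minimal sketch using scipy, treating 0.01 as the per-trial probability of the counted event:

```python
from scipy.stats import binom

n, p = 100, 0.01  # 100 names, 1 percent per-name probability
dist = binom(n, p)

print(f"P[k = 0]  = {dist.pmf(0):.3f}")      # 0.366
print(f"P[k = 1]  = {dist.pmf(1):.3f}")      # 0.370
print(f"P[k = 2]  = {dist.pmf(2):.3f}")      # 0.185
print(f"P[k >= 3] = {1 - dist.cdf(2):.3f}")  # 0.079
```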


Poisson

The Poisson distribution gives the probability of observing j events during a fixed time period, when events occur at a fixed rate per unit of time and independently over time. If the intensity (or average rate per unit of time) is λ, then the probability that j events occur is:

$$P[j \text{ events}] = \frac{e^{-\lambda}\,\lambda^j}{j!}$$

Mean = Variance = λ

Gamma

A gamma random variable is a positive random variable with density

$$f(x;\alpha,\beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}$$

Mean = α/β
Variance = α/β²
Skewness = 2/√α

Negative Binomial

The negative binomial is a discrete distribution (like the binomial, taking values 0, 1, 2, . . . ). The initial definition arises, like the binomial, when considering Bernoulli trials, each of which may be either a success (probability p) or failure (probability 1 − p). Unlike the binomial (in which we consider a fixed number of trials), for the negative binomial we keep counting until there have been r successes. Then the probability of k failures (before r successes) is:

$$P[k \text{ failures before } r \text{ successes}] = \binom{r+k-1}{k} p^r (1-p)^k, \qquad \text{where } \binom{r+k-1}{k} = \frac{(r+k-1)!}{k!\,(r-1)!} \text{ is the binomial coefficient}$$


The definition in various places can differ:

- It may be stated in terms of k successes before r failures.
- It may be stated in terms of the total number of trials (k + r) before a fixed number of successes or failures.
- The binomial coefficient may be expressed as $\binom{r+k-1}{r-1}$ instead of $\binom{r+k-1}{k}$ (examination of the definition of the binomial coefficient will show that these two expressions are in fact identical).

For our purposes, however, we use an extended version of the negative binomial, sometimes called the Polya distribution, for which r, which we will now call α, is real-valued. (For the original negative binomial, r must be an integer > 0.)

This definition of the negative binomial is essentially the same:

$$P[\text{count } k;\ \text{parameter } \alpha > 0] = \binom{\alpha+k-1}{k} p^\alpha (1-p)^k$$

except that the coefficient is the extended binomial coefficient defined by:

$$\binom{\alpha+k-1}{k} = \frac{(\alpha+k-1)(\alpha+k-2)\cdots\alpha}{k!} = \frac{\Gamma(\alpha+k)}{k!\,\Gamma(\alpha)} \quad (k > 0), \qquad \binom{\alpha+k-1}{0} \equiv 1$$
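A small sketch of this extended definition in Python, with hypothetical parameters; scipy's nbinom accepts a non-integer "number of successes," so it should agree with the gamma-function form above.

```python
from math import gamma

from scipy.stats import nbinom

def polya_pmf(k: int, alpha: float, p: float) -> float:
    """P[count = k] = Gamma(alpha + k) / (k! Gamma(alpha)) * p^alpha * (1-p)^k."""
    coeff = gamma(alpha + k) / (gamma(k + 1) * gamma(alpha))
    return coeff * p**alpha * (1.0 - p) ** k

alpha, p = 2.5, 0.6  # hypothetical parameters, alpha deliberately non-integer
for k in range(4):
    print(k, round(polya_pmf(k, alpha, p), 6), round(nbinom.pmf(k, alpha, p), 6))
```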


CHAPTER 12
Liquidity and Operational Risk

Liquidity and operational risk are extremely important, but in some respects more difficult to analyze and understand than market risk or credit risk. For one thing, they are both hard to conceptualize and difficult to quantify and measure. This is no excuse to give them short shrift, but it does mean that the quantitative tools for liquidity and operational risk are not as developed as for market risk and credit risk. This also means that judgment and experience count—it reinforces the idea that risk management is management first and foremost.

I cover liquidity and operational risk in less depth than market and credit largely because they are at an earlier stage of development, and not because they are any less important. In fact, both are critically important. Issues around liquidity risk come to the fore during periods such as the crisis of 2007–2009. The events during that period reflected the bursting of an asset bubble, but the events were combined with, or more correctly generated, a consequent liquidity crisis.1

12.1 LIQUIDITY RISK—ASSET VERSUS FUNDING LIQUIDITY

When we turn to liquidity risk, we find that there are actually two quite distinct concepts. First there is asset liquidity risk (also known as market or product liquidity). This "arises when a forced liquidation of assets creates unfavorable price movements" (Jorion 2007, 333). The second is funding liquidity risk (also known as cash-flow liquidity). This "arises when financing cannot be maintained owing to creditor or investor demands" (Jorion 2007, 333); funding liquidity risk can also be thought of as a maturity mismatch between assets and liabilities.

1 The combination of a bursting asset bubble and a liquidity crisis has been quite common over history—for the United States, think about 1873, 1893, 1907–1908, and 1929–1933 (the Great Depression).

Although asset and funding liquidity go by the same name, they are fundamentally different, and it is truly unfortunate that they are both called liquidity. They are related in the sense that when funding liquidity becomes an issue, then asset liquidity is invariably important. But this is no different from, say, market risk and asset liquidity risk; when there are big market movements we may need to rebalance the portfolio, and then asset liquidity becomes important and possibly contributes to further market losses.

Although going by the same name, the sources of asset and funding liquidity risks, the methods of analysis, and the responses and management of the two are so different that I think it is more fruitful to treat them as distinct. At the end, we can return to examine the connections between them. In fact, these connections will be easier to understand after we have treated them as separate and distinct.

For both asset and funding liquidity risk, we need to examine some of the institutional and operational details of the portfolio and the firm. As Jorion (2007) says: "Understanding liquidity risk requires knowledge of several different fields, including market microstructure, which is the study of market-clearing mechanisms; optimal trade execution, which is the design of strategies to minimize trading cost or to meet some other objective function; and asset liability management, which attempts to match the values of assets and liabilities on balance sheets" (p. 335).

Before discussing asset and funding liquidity on their own, we need to think about a fundamental point: "What questions are we asking?" In earlier chapters, I have emphasized the P&L distribution—stressed that measuring and understanding risk means measuring and understanding the P&L distribution. This is still true for liquidity risk, but we have to examine our assumptions, think a little more about what we are looking at and why.

In earlier chapters, we implicitly assumed that we are interested in the day-by-day P&L for the ongoing business (or week by week, or whatever). This is appropriate and correct. Consider our sample portfolio where we hold $20 million of the U.S. 10-year Treasury and €7 million of futures on the CAC equity index. In calculating the value today, in examining the history of market yields and prices, in estimating potential future P&L, we are always considering this as an ongoing business, a continuing and relatively stable portfolio. We are holding the portfolio for a period, not liquidating it every day and reconstituting it the next morning. Using midmarket prices, ignoring bid-offer spreads and the market impact of selling a position have been reasonable approximations. That does not mean such considerations are unimportant, simply that they have not been the primary focus of our attention. The questions we have been asking (even if we have not been explicit about this) are: What is the day-by-day P&L? How high or low could it be? Where does it come from, and what contributes to the variability of the P&L? We have been focused on the ongoing running of the business, not on winding down the portfolio.

Such questions, however—questions about the P&L if we were to wind down the portfolio—are important. We should ask questions such as: Under what conditions might we wish to substantially alter the composition of the portfolio? Under what conditions might we be forced to wind down the portfolio? What would be the cost of altering or winding down the portfolio? What would be the source of those costs, and would those costs change depending on the trading strategy employed for altering or winding down the portfolio?

Asking these questions impels us to look at liquidity issues. We also need to change focus somewhat. For asset liquidity, we will focus on questions such as how much it might cost to completely unwind the portfolio, how long such an unwind might take, and what are optimal methods for executing changes in the portfolio. Our focus is still on the P&L, but it is possibly over a different time horizon, under circumstances different from the day-to-day, normal operations of the business. We are still asking questions about the P&L distribution, but the questions are different from those we ask about standard market risk. It is hardly surprising, therefore, that both the tools we use and the answers we get will be somewhat different.

For funding liquidity risk, we focus on questions such as how the asset-liability structure of the firm might respond to different market, investor, or customer circumstances. Once again, we are still interested in the P&L, but we are not asking what the P&L might be during standard operations but how it might be affected by the liability and funding structure of the firm.

We could conceive, theoretically, of building one complete, all-encompassing model that would include the P&L when the firm is an ongoing business with no big changes in the portfolio, how asset liquidity considerations enter when there are big changes or unwinds in the portfolio, and how all this affects or is affected by the asset-liability structure of the firm. Such a goal may be commendable, but is highly unrealistic. It is better to undertake a specific analysis that focuses on these three issues separately, and then use these analyses to explore and understand the interaction of the three types of risk. Essentially, I am arguing for analyzing asset and funding liquidity risk with a different set of measures and tools from what we use for standard market or credit risk. It is more fruitful to develop a different set of measures rather than trying to adjust standard volatility or VaR. At the same time, we want to clearly delineate the relation with standard volatility and VaR measures.


12.2 ASSET LIQUIDITY RISK

When we turn to asset liquidity risk, the central question is: What might the P&L be when we alter the portfolio? Most importantly, what is the P&L effect due to liquidity of different assets?

In earlier chapters, when we examined the P&L distribution, we ignored any effect of bid-offer spread, the impact on market prices of buying or selling our holdings, or over what period we might execute a transaction. We assumed that all transactions were done instantaneously and at midmarket. This was fine because our main focus was market movements and we could afford to ignore transactions costs. Here we change gears and focus primarily on those transactions costs.2

Costs and Benefits of Speedy Liquidation

Liquidity and transactions costs generally affect the P&L through two mechanisms. First is the bid-offer spread. Virtually any asset will have a price at which we can buy (the market offer) and a lower price at which we can sell (the market bid). When we liquidate, we go from the midmarket (at which we usually mark the positions) to the worse of the bid or offer. Illiquid assets will be characterized by a wide spread. Furthermore, spreads may vary by the size of the transaction and the state of the market. A transaction that is large relative to the normal size or that is executed during a period of market disruption may be subject to a wider bid-offer spread.

The second mechanism through which liquidity and transactions costs enter is the fact that market prices themselves may be affected by a large transaction. Trying to sell a large quantity may push down market prices below what the price would be for a small or moderate-size transaction.

These two effects may be summarized in a price-quantity or price impact function:

$$P(q) = \begin{cases} P_m\,[1 + k_b(q)] & \text{for buy} \\ P_m\,[1 - k_s(q)] & \text{for sell} \end{cases} \qquad q = \text{number of shares} \qquad (12.1)$$

where q is the quantity bought or sold (say, the quantity transacted in one day). The function (12.1) might look like Figure 12.1, Panel A, where the bid-offer spread is $0.50 for quantity up to 50,000 shares but widens for larger-size transactions—the bid price goes down and the offer price goes up as the quantity transacted increases. (The change is shown as linear, but this, of course, is not necessary.)

[Figure 12.1: Price Impact Function—Price and Percent Terms. Panel A: Price Impact, Spread Widens. Panel B: Price Impact, Sale Lowers Price. Each panel plots the market bid, mid-market, and offer prices ($99.5 to $100.5 around a $100.0 mid) against quantity transacted (50,000 to 200,000 shares).]

2 Also see Jorion (2007, section 13.2) for a discussion of these asset liquidity issues.

Panel A shows the change in bid and offer prices as symmetric, but this need not be the case. It may be that a large sale pushes the market price down enough to have a significant price impact on both the bid and the offer. This is shown in Panel B, where both the bid and the offer go down in response to a large sale. We can think of Panel A as a widening of the bid-offer spread, and Panel B as an actual change in the market price. It does not really matter in the end whether we think of the impact of a large transaction as a widening of the spread or a changing of the price. If we are selling, it is only the price at which we sell, the market bid, that we care about. Whether a change in price is the result of market makers widening the spread or altering a midmarket price is irrelevant from the perspective of what price we face.
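A minimal sketch of a price impact function of this shape, in the spirit of equation (12.1) and Figure 12.1. The 50,000-share flat region and the $0.50 spread come from the figure; the widening slope is a purely hypothetical choice.

```python
def sell_price(mid: float, q: float, half_spread: float = 0.25,
               flat_size: float = 50_000, slope: float = 5e-6) -> float:
    """Bid faced when selling q shares: half the normal spread below mid
    up to flat_size shares, then falling linearly with the sale size."""
    extra = slope * max(q - flat_size, 0.0)  # extra dollars of impact
    return mid - half_spread - extra

print(sell_price(100.0, 10_000))   # 99.75, inside the flat region
print(sell_price(100.0, 200_000))  # 99.00, a large sale pushes the bid down
```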

For actual use we may wish to express the price impact in percentage terms, and as a function of the quantity measured in dollars or euros rather than number of shares:

$$p(w) = \begin{cases} +k_b(w) & \text{for buy} \\ -k_s(w) & \text{for sell} \end{cases} \qquad w = \text{quantity in dollars} \qquad (12.2)$$

Such a change, however, is simply a matter of units. Once we have the price impact function, (12.1) or (12.2), we can examine the cost of liquidating part or all of the portfolio. If we hold W dollars of the asset (and using (12.2)), the cost of liquidating the portfolio in one day is W k(W). If we liquidate over n days, selling W/n per day, the cost per day is (W/n) k(W/n), and the total cost is:

$$\text{cost of liquidating over } n \text{ days}: \quad W\,k(W/n)$$

If the price impact function were linear, k(W) = k₀W, then the cost of liquidating would be:

$$\text{cost of liquidating over } n \text{ days with linear function}: \quad k_0 W^2/n \qquad (12.3)$$

Clearly, the cost of transacting over a shorter period will be higher than over a long period, but there is a trade-off. Transacting over a longer period means that market prices may move against us, generating market losses. The key is to assess the trade-off between transacting quickly versus slowly. If we transact quickly, we pay a high cost but we avoid the risk that market prices will move against us and generate a loss on the portfolio. If we transact slowly, we pay a lower cost but there is a higher chance markets will move against us.


To measure how much markets may move against us, we need to think about the P&L distribution over one day, two days, and so forth. As usual, we assume that P&L is independent from one day to the next, so we can add variances (which gives the standard square-root-of-t rule for the volatility). But here we do not add the original portfolio variances because, if we are selling off part of the portfolio each day, the portfolio is decreasing in size day by day.

Let us assume that we transact at the end of a day, so that if we liquidate in one day, we are subject to one day's worth of volatility; if we liquidate in two days, we are subject to two days' volatility, and so on. If we are liquidating over n days equally, we will be adding:

day one: Variance = σ²
day two: Variance = (1 − 1/n)²σ²
. . .
day n: Variance = (1 − (n − 1)/n)²σ²

where σ is the original portfolio volatility. These terms sum to:

$$\text{Variance of portfolio liquidated over } n \text{ days} = \frac{n}{3}\left(1+\frac{1}{n}\right)\left(1+\frac{1}{2n}\right)\sigma^2$$

$$\text{Volatility of portfolio liquidated over } n \text{ days} = \sqrt{\frac{n}{3}\left(1+\frac{1}{n}\right)\left(1+\frac{1}{2n}\right)}\;\sigma \qquad (12.4)$$

Equation (12.4) shows us that the volatility of a portfolio that is being liquidated uniformly over a given horizon grows more slowly than the volatility of the original portfolio. A portfolio that is liquidated over 30 days has roughly the same volatility as the original portfolio held for 10 days.

This is the volatility assuming the portfolio is liquidated evenly over a period. We could examine alternate assumptions, but the idea remains the same: the portfolio variance falls over the period, and the total variance over the period is the sum of the daily variances. In the following I will assume even liquidation. This is clearly a simplification but is valuable for illustration and for building intuition.
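To see the 30-day claim concretely, plug n = 30 into (12.4) and compare with holding the full portfolio for 10 days:

$$\frac{30}{3}\left(1+\frac{1}{30}\right)\left(1+\frac{1}{60}\right)\sigma^2 \approx 10.5\,\sigma^2 \quad\text{versus}\quad 10\,\sigma^2,$$

so the liquidation volatility is roughly $\sqrt{10.5}\,\sigma \approx 3.24\,\sigma$, close to the 10-day volatility of $\sqrt{10}\,\sigma \approx 3.16\,\sigma$.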

Evaluating Liquidation over Various Horizons

We have laid out the cost-of-liquidation equation (12.3), and the effect on the portfolio volatility, equation (12.4), for various horizons. We now have the building blocks we need and we can turn to evaluating the trade-offs of liquidation over various horizons. But there is a fundamental issue here. We cannot directly compare the cost from equation (12.3) versus the volatility from equation (12.4). The cost shifts down the mean of the distribution. The volatility measures the spread of the distribution. One of the first things we learn in finance is the difficulty of evaluating the trade-off between changes in mean returns (in this case, the cost of liquidation over various horizons) versus variance or volatility (the increase in volatility due to more leisurely liquidation).

In other areas of risk measurement, we invariably compare P&L distributions by comparing their volatilities or VaRs—higher volatility or VaR means higher risk. We cannot do that here. In other situations, the distributions have zero mean (or close enough that it doesn't matter much). We can ignore the mean. Here we cannot because the whole issue is that the liquidation shifts down the mean of the distribution. We need to look at the distributions themselves to appropriately evaluate the trade-off between speedy versus leisurely liquidation.

Simple Example—€7 million CAC Position  Let us consider a simple example portfolio—a single position of €7 million futures on the CAC equity index ($9.1 million at the then-current exchange rate). The CAC futures is actually very liquid and so not a good example, but let us say, for the purposes of argument, that we actually have an illiquid total return swap (TRS). Let us also assume that we have estimated the price impact function (in percentage terms, equation (12.2)) as:

$$p(w) = -1.099 \times 10^{-8} \times w \qquad w = \text{transaction amount in dollars}$$

In other words, if we sell $910,000 in one day, the price impact is 1 percent and the cost is $9,100. If we sell $9,100,000, the price impact is 10 percent and the cost is $910,000. The cost of selling an amount w is:

$$\text{cost}(w) = 1.099 \times 10^{-8} \times w^2 \qquad w = \text{transaction amount in dollars}$$

The cost of selling the full position in n days, from equation (12.3), is:

$$\text{cost}(w) = 1.099 \times 10^{-8} \times w^2/n \qquad \text{selling equal amounts over } n \text{ days}$$

The original daily volatility of this position (the full €7 million or $9.1 million) is $230,800. The volatility of liquidating the position over n days is given by equation (12.4) with σ = 230,800.
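A short Python sketch reproduces the cost and volatility numbers of Table 12.1 directly from equations (12.3) and (12.4), using only parameters given in the text:

```python
import math

k0 = 1.099e-8      # linear price impact slope, per dollar transacted
W = 9.1e6          # full position, dollars
sigma = 230_800.0  # one-day volatility of the full position, dollars

def cost(n: int) -> float:
    """Equation (12.3): cost of even liquidation over n days."""
    return k0 * W**2 / n

def vol(n: int) -> float:
    """Equation (12.4): volatility over an even n-day liquidation."""
    return sigma * math.sqrt(n / 3 * (1 + 1 / n) * (1 + 1 / (2 * n)))

for n in (1, 2, 5, 10, 31):
    print(f"{n:>2} days: cost ${cost(n):>9,.0f}  volatility ${vol(n):>9,.0f}")
```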

Table 12.1 shows the costs and volatilities for liquidating this position over selected horizons. The cost of liquidating in one day is $910,000 while the cost over two days is half that, $455,000. The volatility grows according to equation (12.4), from $230,800 for one day to $258,000 for two days, and so on.

Remember that the cost represents a shift down in the mean of the P&L distribution and the volatility measures the dispersion of the distribution. Figure 12.2 shows the distributions corresponding to the numbers in Table 12.1.

TABLE 12.1 Costs and Volatility for Liquidating Hypothetical Portfolio over Various Periods

No. Days    Cost $     Volatility $
 1          910,000    230,800
 2          455,000    258,000
 5          182,000    342,300
10           91,000    452,900
31           29,350    759,800

[Figure 12.2: P&L Distributions for Various Liquidation Periods. Panel A: Distribution for 1 Day—the no-costs distribution centered at zero and the with-costs distribution shifted left to a mean of –$0.91 million. Panel B: Distributions for 1, 2, and 5 Days. Horizontal axis: P&L from –$1.5 million to +$0.5 million.]


In Panel A, we start with the P&L distribution with no liquidation costs. The dotted line in Panel A is the P&L distribution for the ongoing business—simply the usual distribution with the usual one-day volatility ($230,800 in this case) and zero mean. We are interested, however, in the solid line in Panel A that shows the distribution for one-day liquidation. The distribution is shifted to the left by the single-day liquidation costs, giving a distribution with a lower mean. The introduction of these liquidation costs shifts the whole distribution to the left, so we have a distribution with the same volatility but a mean of –$910,000. (A vertical line is drawn in to show the mean at –$910,000.)

Now that we have the distribution for one day, we can start to compare across days. Panel B shows the distributions for one, two, and five days. The distribution for one-day liquidation is shifted far to the left (low mean) but with a relatively small volatility. The distribution for two days is shifted less to the left, but with a somewhat higher volatility (wider distribution). The distribution for five days is shifted even less to the left but is wider (more dispersion) than the distributions for either one or two days.

With Figure 12.2, we can ask the question: What is the optimal liquidation horizon? What trade-off should we choose between the high costs of speedy liquidation versus the increased dispersion of leisurely liquidation?

In fact, there is no definitive answer. There is no single number that gives us the answer, and the definition of a "liquidity adjusted VaR" is simply not appropriate. (See Jorion 2007, 344 for an attempt to define such a liquidity adjusted VaR.) This is a classic situation in which the answer depends on the user's trade-off between mean and volatility. Although we cannot say definitively, however, we can see in Panel B that in this particular case the one-day distribution is shifted so far to the left relative to the five-day distribution that it would be hard to imagine anyone preferring the one-day over the five-day liquidation. Comparing two days to five days, it would also seem that the five-day liquidation would be preferable—the density for five days is almost all to the right of the two-day density.3

We can also see from Figure 12.2 that simply comparing VaR across distributions is not appropriate. The vertical lines in Panel B show one standard deviation (1σ) below the mean for each of the distributions—the cost less 1.0 times the volatility. This will be the 16 percent VaR (assuming for now that the P&L distribution is normal). In Figure 12.2, the 16 percent VaR for the two-day liquidation is lower than for the five-day liquidation.

3 From Figure 12.2 it might appear that a longer liquidation period is always better. This is not the case. As we liquidate over longer periods, the volatility eventually rises much more than the cost decreases. This can be seen in Table 12.1.


In other words, relying on the 16 percent VaR would point us toward a five-day liquidation, which seems to be the right answer in this case.

Comparing the 16 percent VaR gives the right answer in this case, but we can always go far enough out in the tail of the distribution to get a VaR for the two-day distribution that is better than the VaR for the five-day distribution. Simply put, because the means of the distributions are different, we cannot blindly rely on VaRs to compare distributions across days. Table 12.2 shows the cost less 3.09 times the volatility. This is the 0.1 percent VaR. This 0.1 percent VaR is virtually the same for the two-day and five-day liquidation. Relying on the 0.1 percent VaR would seem to indicate that the two-day and five-day liquidation horizons are equally good, when examination of Figure 12.2 shows that is clearly not the case.

Issues around Asset Liquidity

Thinking about, calculating, and evaluating asset liquidity risk is difficult. For market risk and for credit risk, there is a well-developed framework. We are concerned with the P&L distribution. We are interested primarily in the spread or dispersion of the distribution. We can summarize the dispersion in various ways—using, say, the volatility or VaR or expected shortfall—but these single-number measures generally give a pretty good idea of the dispersion.

For asset liquidity risk, in contrast, we run into two important issues. First, the framework for thinking about asset liquidity risk is not as well developed and there is not the same depth and range of developed practice. Also, the evaluation of asset liquidity and liquidation over various horizons cannot be easily summarized into a single summary number—we have to consider the thorny issue of trade-offs between mean and variance. Second, practical issues with implementation, data collection, and calculation are substantial. The balance of this section reviews these two issues, in reverse order.

TABLE 12.2 Costs and Volatility for Liquidating Hypothetical Portfolio over Various Periods, with 0.1 Percent VaR

No. Days    Cost $     Volatility $    –Cost–Vol     –Cost+Vol    –Cost–3.09×Vol
 1          910,000    230,800         –1,141,000    –679,200     –1,623,000
 2          455,000    258,000           –713,000    –197,000     –1,252,000
 5          182,000    342,300           –524,300     160,300     –1,240,000
10           91,000    452,900           –543,900     361,900     –1,490,000
31           29,350    759,800           –789,200     730,500     –2,377,000
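The VaR columns of Table 12.2 are simple to recompute; this sketch shows why the ranking flips between the 16 percent (1.0σ) and 0.1 percent (3.09σ) points:

```python
from scipy.stats import norm

# Tail probabilities for the two multipliers used in the text
print(norm.cdf(-1.0), norm.cdf(-3.09))  # about 0.16 and 0.001

def var_point(cost: float, vol: float, z: float) -> float:
    """Quantile of the liquidation P&L: mean shift minus z volatilities."""
    return -cost - z * vol

for n, c, v in [(2, 455_000, 258_000), (5, 182_000, 342_300)]:
    print(f"{n} days: 16% VaR {var_point(c, v, 1.0):,.0f}, "
          f"0.1% VaR {var_point(c, v, 3.09):,.0f}")
```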


Practical Issues around Implementation  The central issue for asset liquidity risk is equation (12.1) or (12.2), the price impact function, from which we calculate costs of liquidation. I stated that equation (12.1) reflected bid-offer spreads and the response of market prices to large trades. For the application of equation (12.1), the actual mechanism that generates changes in price with transaction size matters less than the values of the function itself.

The essential problem, however, is that estimating any kind of equation like (12.1) is very difficult, particularly for the illiquid securities where it is most critical. For market risk, historical prices for securities and market risk factors are relatively easy to find, and these historical prices are what we need for estimating the standard market risk P&L distribution, as detailed in Section 8.3 of Chapter 8. For asset liquidity, it is much harder to find data with which we can estimate a price impact function.

The first problem is that only a subset of assets have good publicly available data on prices and trades. Exchange-traded assets such as equities have good data on volumes, trades, bid-offer spreads, and prices, all at various frequencies from tick-by-tick to daily, weekly, monthly. Over-the-counter instruments, however, make up a large portion of many portfolios, and trade data on such markets is limited.

The second problem is that even when we do have trade data, determining when a price change is due to a large trade versus occurring for other reasons related to more fundamental factors is difficult. Say we observe a trade that is larger and at a lower price than trades just before it. The lower price may result from a seller trying to unload a large position, pushing the price down. Alternatively, both the trade and the lower price may be a result of news or information that both pushes down the market price and induces some market participant to sell their position.

There are, however, a few simple things that we can do, and which can have a large effect on our estimates of asset liquidity costs. For exchange-traded assets, we can often estimate the first part of function (12.1), the bid-offer spread for small-to-moderate size (represented in Figure 12.1 by the flat line for 50,000 shares and less), without too much difficulty. We can get statistics on the bid-offer spread and on the average or median daily trading volume. For example, Table 12.3 shows the statistics for trading in common shares of IBM (NYSE ticker IBM) and in ING 6.125 percent perpetual preferreds (NYSE ticker ISG). (Data are as of October 2011.)

Just this limited amount of information provides valuable insight. ISG is far less liquid than is IBM—we already know that—but these figures provide quantitative measures for this. The bid-offer spread for ISG is much larger (in percentage terms) than for IBM. This gives us a start to estimate the cost of liquidating a position and immediately tells us that transacting in ISG will be more costly than transacting in IBM. Both the daily volume and the shares outstanding for ISG are tiny relative to those for IBM. This indicates the size of position we might expect to be able to transact easily versus with more difficulty. Trying to sell a position of $1 million in ISG should probably not be a big issue, but $20 million would be. For IBM, of course, selling $20 million would not be an issue.

Developing such information on normal bid-offer spreads and normal trade size for nonexchange-traded assets is more difficult. Doing so will generally require a firm to exploit either internal data sources (firm trade records, if those are rich enough) or the judgment and experience of traders.

We can think of data on the normal bid-offer spread and the normal trade size as giving us the first part of the price impact function shown in Figure 12.1—the flat section for 50,000 shares or fewer. With this first part of the price impact function, we can examine the portfolio and determine whether asset liquidity issues are likely to arise. If all the holdings are less than the normal daily trade size and bid-offer spreads are relatively narrow, then liquidating the portfolio in a single day is unlikely to have a large price impact. In fact, using the bid-offer spreads we can make a first (minimum) estimate of the cost of single-day liquidation.

If, on the other hand, there are significant holdings that are large relative to normal daily trade size, then we have to tackle the problem of extending the price impact function and evaluating liquidation across different horizons.

In many cases, estimating the section of the price impact function beyond the flat, bid-offer section (in Figure 12.1, the section for more than 50,000 shares) will be a matter of substantial approximation and judgment. The exercise of putting numbers to a price impact function should not lull us into thinking that we have solved the asset liquidity issue. It should, instead, push us toward making our assumptions about liquidity more concrete while also critically examining those assumptions.

TABLE 12.3 Market Statistics for IBM and ISG

                                              IBM         ISG
Market Price                                  $185        $18
Bid-Offer Spread—$                            $0.07       $0.14
Bid-Offer Spread—%                            0.04%       0.78%
Average daily volume (3mth), shares, '000s    7,100       104
Average daily volume (3mth) ($ million)       $1,313.5    $1.9
Shares outstanding (million)                  1,194       28
Shares outstanding ($ million)                $221,000    $504

Note: "IBM" is the common shares of International Business Machines. "ISG" is the 6.125 percent perpetual preferred debentures for ING.


One final issue regarding the price impact function (12.1) or (12.2): we have been treating the function as deterministic, with no random variation in costs. This is of course too simplistic. It is fine for a start, but ideally we would want the costs to be random. We could think of the equation as being:

$$p(w) = \tilde{k}(w) \qquad \text{or} \qquad p(w) = k(w) + \tilde{z}(w) \qquad w = \text{quantity in dollars}$$

In the first equation, k̃(w) could be assumed log-normally distributed (so that the percent cost would always be positive, with mean and variance a function of w), or in the second equation, z̃(w) could be assumed normally distributed (as long as z is small relative to k, there would be a low chance of the cost going negative, and here z would be normal with mean zero and variance depending on w). When the cost is random, the cost will alter the volatility of the P&L as well as shifting the mean.
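A minimal sketch of the first (lognormal) variant, reusing the impact slope from the CAC example; the dispersion parameter cv is a hypothetical choice, and the lognormal multiplier is scaled so its mean is one:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_impact(w: float, k0: float = 1.099e-8, cv: float = 0.25) -> float:
    """Percentage cost with a lognormal multiplier, so the cost stays
    positive; mean -cv**2/2 makes the multiplier average out to one."""
    return k0 * w * rng.lognormal(mean=-0.5 * cv**2, sigma=cv)

draws = [random_impact(9.1e6) for _ in range(100_000)]
print(np.mean(draws))  # close to the deterministic 10 percent impact
```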

Framework for Evaluating Asset Liquidity  I argued earlier that a reasonable way to think about asset liquidity is to treat the cost of liquidation as shifting the P&L distribution to the left. Faster liquidation imposes costs that shift the distribution further to the left, while leisurely liquidation reduces costs but widens the distribution and leads to larger potential trading losses. The problem reduces to choosing a trade-off between costs versus volatility.

The price impact function provides the cost data that form the foundation for this analysis. The biggest problem, of course, is that there is considerable uncertainty in most estimates of price impact functions. This means that we have to be careful in interpreting and using any asset liquidity analysis. Nonetheless, just the exercise of estimating the functions and analyzing the portfolio can shed considerable light on any asset liquidity issues. If nothing else, it can point out whether asset liquidity is likely to be an issue for the portfolio under consideration.

I argued earlier that an understanding of the trade-offs between quick versus slow liquidation requires considering the full distributions, as shown in Figure 12.2. Nonetheless, considering just the mean (cost) and volatility, numbers such as those shown in Table 12.4, can provide considerable insight. We must, however, keep in mind that we are really thinking about the distributions such as shown in Figure 12.2.

Table 12.4 shows the cost and volatility for the example discussed earlier, but also shows the cost and volatility as a percent of assets. Liquidating in one day is clearly expensive, with the cost substantially higher than the volatility. Liquidating over two days dramatically reduces costs while not increasing volatility dramatically. To me, liquidating over something like 5 or 10 days seems reasonable, while waiting for 31 days seems to increase volatility too much relative to the reduction in costs.


A further benefit of having the costs laid out as in Table 12.4 is that we could estimate the reserves or capital that might be required to withstand losses from liquidating. The cost would be an expected loss, while we would need to add an additional amount to protect against unexpected losses—say, the 1 percent VaR. Note that here we are adding the expected costs to the 1 percent VaR, but the interpretation is not a liquidity-adjusted VaR. As I argued before, such a concept is not sensible. We are asking instead what reserve we might want to take, accounting for both expected losses (the costs) and unexpected losses (the VaR as a deviation from the mean).

One final issue deserves mention regarding calculation of volatilities over a liquidation horizon. In deriving equation (12.4) we assumed that the portfolio was unchanged during liquidation, apart from being reduced by 1/n each day. This may be a reasonable assumption, and certainly is useful as a base case and to help build intuition. In practice, however, more liquid assets would probably be liquidated more rapidly. This could easily be accommodated by calculating the volatility of the projected portfolio day by day, accounting for what assets would be liquidated quickly versus slowly.

Such an exercise could be quite valuable in its own right by potentially highlighting problems with unwinding offsetting hedges. Say, for example, that in the original portfolio, a long position in an illiquid equity is hedged with a liquid equity index futures. If the equity index futures were sold off early and the stock itself sold off slowly, there might be a large and unintended increase in portfolio volatility due to the illiquid equity being left unhedged.

Conclusion

Asset liquidity focuses on the asset side of the balance sheet, and particularly on the cost of liquidating positions. These costs can be quite different across assets, and they can be quite difficult to estimate.

TABLE 12.4 Costs and Volatility for Liquidating Hypothetical Portfolio over Various Periods

No. Days    Cost $     Volatility $    Cost %    Volatility %
 1          910,000    230,800         10.0%     2.5%
 2          455,000    258,000          5.0%     2.8%
 5          182,000    342,300          2.0%     3.8%
10           91,000    452,900          1.0%     5.0%
31           29,350    759,800          0.3%     8.3%


This section has argued that the appropriate way to assess asset liquidity risk is to compare liquidation strategies across different horizons. Fast liquidation leads to high costs but avoids potential losses resulting from market movements. Leisurely liquidation reduces costs but leaves the portfolio open to possible losses if the markets move against the portfolio. To properly evaluate fast versus leisurely liquidation, we need to recognize that we have to decide on a trade-off between expected costs versus the volatility of market movements; simply calculating a liquidity adjusted VaR as the sum of standard VaR plus costs mixes apples and oranges.

12.3 FUNDING LIQUIDITY RISK

We now turn from a focus on the asset side of the balance sheet to look at the liability side. Quantitative risk measurement is mostly concerned with statistics, probability, and mathematics. But as I have tried to emphasize throughout this book, risk management is about managing the firm, doing whatever it takes, using whatever tools and techniques are available and necessary, to understand and manage the risk. Funding liquidity risk is a prime case of when we do not necessarily need fancy mathematics; we need instead common sense and attention to details.

Funding liquidity focuses on the sources of funds. Risk management and risk measurement generally focus on the uses of funds, the investments, and assets held. Funding and the liability side of the balance sheet are not the natural province of most risk professionals. Funding more naturally falls under the CFO or Treasury function rather than trading. Having said that, funding liquidity is critically important. During a crisis, it is often the liquidity issues that bring down a firm. To mention only a few instances, LTCM, Metallgesellschaft, and Askin Capital Management were all subject to severe liquidity issues. Such problems become paramount and industry-wide during a liquidity crisis such as the U.S. subprime-triggered crisis of 2007–2009 and the eurozone crisis that began in 2011.

What is funding liquidity risk? Simply stated, it arises from mismatches between assets and liabilities. Not mismatches in value (when assets are worth less than liabilities, that becomes an issue of solvency) but rather mismatches in timing. It is often hard to separate solvency issues from liquidity issues, and liquidity problems can morph into solvency issues, but conceptually we want to keep them distinct.

The history of banking and finance is the story of aggregation and intermediation. We can go back to Bagehot's Lombard Street from 1873 to see that finance has long been a means for taking funds from depositors or investors and channeling those funds to more profitable uses, to entrepreneurs or companies.

A million in the hands of a single banker is a great power. . . . But the same sum scattered in tens and fifties through a whole nation is no power at all. . . . Concentration of money in banks . . . is the principal cause which has made the Money Market of England so exceedingly rich.

In this constant and chronic borrowing, Lombard Street [London's 19th-century Wall Street] is the great go-between. (Chapter I)

But the aggregating and channeling of funds invariably entails a mismatch between the liabilities owed to investors or depositors and the assets invested. This is the case whether we are looking at a traditional bank or a hedge fund, but it is easiest to see with traditional banking. A bank aggregates retail deposits and channels these deposits to mortgage loans, commercial loans, or whatever other assets in which it invests. The deposits are demand deposits, redeemable upon demand. The loans are long-term, months or years in duration.

We are abstracting from solvency issues, so we assume that assets are good and there is no excessive risk of default or other losses on the loans. But say that, for some reason, depositors all demand repayment. The bank is ruined in such a situation. There is no possibility that a bank can pay back all depositors immediately because the assets do not mature for a considerable time and the assets are not liquid enough to be quickly sold. There is a fundamental mismatch between assets and liabilities.

Financial firms other than banks are exposed to similar funding or asset-liability mismatches. An investment firm takes investor funds and invests in market securities. These market securities will generally be more liquid than bank loans, but the redemption terms for investors will often have a shorter term than the duration or term of the assets, and the assets will not be liquid enough to allow immediate liquidation.

Leveraged investments, wherever they are housed, will always be subject to funding liquidity risk. The money borrowed to fund a leveraged position will be short term while the assets are longer in duration. Consider a bond repurchase agreement or repo—funding the purchase of a bond by borrowing the purchase price and posting the bond itself as security for the loan. The repo agreement, and thus the funds borrowed, will almost always be short-term: overnight, maybe monthly. Repo arrangements will usually require a so-called haircut, in which only a fraction of the bond price can be borrowed. The haircut might be 2 percent or 5 percent, so that a firm can borrow 98 or 95 percent of the purchase price. During times of market disruption, or when a firm comes under pressure, the haircut might be raised. Since the repo agreement is short-term, this can be done quickly. If the haircut goes from 5 percent to 10 percent, then the cash required to maintain the bond position doubles—a classic funding liquidity problem.
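The arithmetic behind that doubling is simple; a minimal sketch with a hypothetical $100 million position:

```python
def cash_required(position: float, haircut: float) -> float:
    """Cash the borrower must fund itself: the unfinanced fraction."""
    return position * haircut

bond = 100_000_000  # hypothetical $100 million repo-financed bond position
print(cash_required(bond, 0.05))  # $5 million at a 5 percent haircut
print(cash_required(bond, 0.10))  # $10 million at 10 percent: doubled
```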

Measuring and managing funding liquidity issues reduces to good asset-liability management. This is not the central focus of this book, but we can learn something by focusing on how such asset-liability analysis might work for a commercial bank.

Framework for Funding Liquidity—Traditional Banking

Funding liquidity risks arise regularly in traditional banking. This section follows Marrison (2002, ch. 14) in the discussion of asset-liability management within a commercial bank. This discussion provides a framework for thinking about and measuring funding liquidity risk.

As discussed before, a bank has a preponderance of short-maturity liabilities. These will be demand deposits such as checking deposits but will usually also consist of short-term funding raised on the wholesale markets, from institutional investors. Banks take these short-term funds and invest the bulk in long-dated and illiquid assets such as commercial loans. There will be random variations in the demands by retail investors for cash, and minor fluctuations in the price and availability of wholesale funds, but these fluctuations will generally be minor. A bank will hold cash and other reserves to satisfy these random fluctuations.

At rare times, however, customer demands for cash or the availability of wholesale funds will change dramatically. This might be because there is a rumor that the bank is in trouble, or it could be a systemic problem that pushes a large proportion of customers to demand cash and counterparties to stop supplying wholesale funds. For whatever reason it occurs, such a change will push the bank into a funding liquidity crisis. The funding problem will then become self-fulfilling, since once a funding problem starts, more customers will demand cash and fewer counterparties will lend in the wholesale markets.

Cataloging Sources and Uses of Funds Measuring and managing funding liquidity comes down to measuring and managing the bank's inflows (sources of funds) and outflows (uses of funds).4 We go about this in two stages. First, we define and classify the bank's sources and uses of funds.

4 As mentioned earlier, this section follows Marrison (2002, ch. 14).


This gives us a framework for measuring the net cash position and for identifying the likely size and sources of fluctuations. Second, we consider three regimes, or sets of conditions, that lead to three sets of funding requirements: normal conditions with normal fluctuations that lead to expected funding requirements; unusual conditions with large fluctuations that lead to unusual funding requirements; and extreme conditions with extraordinary fluctuations that lead to crisis funding requirements and economic capital.

To lay out the framework for sources and uses of funds, we classify payments into four categories: scheduled payments, unscheduled payments, semidiscretionary payments, and discretionary or balancing transactions. Typical flows for a bank falling into these four categories might be:

- Scheduled payments—previously agreed or contracted payments that cannot be changed easily or quickly. Examples would include:
  - Outflows or uses of cash = OS: loan disbursements; repayments to customers such as maturing CDs; loan repayments to other banks; bond coupons.
  - Inflows or sources of cash = IS: payments from customers such as loan repayments.
- Unscheduled payments—arising from customer behavior.
  - Outflows or uses of cash = OU: repayments to customers, such as checking-account withdrawals; loan disbursements on things like credit cards and lines of credit; payments to corporations such as standby lines of credit.
  - Inflows or sources of cash = IU: payments by customers such as deposits into checking accounts.
- Semidiscretionary payments—payments that occur as part of the bank's trading operations but that can be changed without undue difficulty.
  - Outflows or uses of cash = OSD: purchases of securities; outgoing cash collateral.
  - Inflows or sources of cash = ISD: sales of trading securities; incoming cash collateral.
- Discretionary or balancing transactions—carried out by the funding unit to balance daily cash flows.
  - Outflows or uses of cash = OB: lending in the interbank market; increases in cash reserves.
  - Inflows or sources of cash = IB: borrowing in the interbank market; calls on standby lines of credit with other banks; drawdowns of cash reserves; borrowing from the central bank (the Federal Reserve) at the discount window (only in grave circumstances).


Using this classification, we can write down the net balancing transactions necessary to balance the bank's daily cash sources and uses. The net balancing transactions (measured as the cash that must be raised) will be the sum of outflows less the sum of inflows:

NB = (OS + OU + OSD) − (IS + IU + ISD)          (12.5)

The scheduled terms are known, and it is useful to group the random components:

R = (OU + OSD) − (IU + ISD)

so that we can write the net balancing transactions as:

NB = (OS − IS) + R          (12.6)

We can model the random term R as normally distributed with mean μR and standard deviation (volatility) σR.
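
As a minimal sketch of this bookkeeping (the category totals below are hypothetical), equations (12.5) and (12.6) can be computed directly from the classified flows:

```python
# Hypothetical daily flows, $mm, classified as in the text
outflows = {"scheduled": 120.0, "unscheduled": 35.0, "semidiscretionary": 50.0}
inflows  = {"scheduled": 110.0, "unscheduled": 30.0, "semidiscretionary": 45.0}

# Equation (12.5): cash the funding unit must raise today
NB = sum(outflows.values()) - sum(inflows.values())

# Equation (12.6): known scheduled net flow plus random term R
scheduled_net = outflows["scheduled"] - inflows["scheduled"]
R = (outflows["unscheduled"] + outflows["semidiscretionary"]) \
    - (inflows["unscheduled"] + inflows["semidiscretionary"])

assert abs(NB - (scheduled_net + R)) < 1e-9
print(f"Net balancing transactions NB = ${NB:.1f}mm")
```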

The Funding Distribution and Funding Requirements So far, we have done nothing more than define accounting relations. Doing so, however, organizes the data and focuses attention on the critical aspects of funding liquidity. It also allows us to think about funding liquidity risk in exactly the same way we have thought about market or credit risk, here focusing on the funding distribution. In equation (12.6), we are treating the net funding requirements or balancing transactions as a random variable, and so we can think about the distribution of the net funding. We can use the same tools and techniques that we applied previously: estimate the distribution and then use it to calculate how likely we are to have a large positive or negative funding requirement.

For funding, it is useful to think about the funding requirements under different conditions: expected funding requirements, unusual funding requirements, and crisis funding requirements.

Expected funding requirements: This is easy conceptually, simply the scheduled payments plus the average (expected value) of all other payments:

NExp = (OS − IS) + μR

This will include scheduled payments such as promised loan payments (both incoming from corporate loans and outgoing repayments on loans the bank has taken out), coupon payments, new loan originations, and so forth.


It will also include expected levels and changes in unscheduled items (such as checking-account balances) and semidiscretionary items (such as purchases of government bonds).

One important point—the analysis of expected funding requirements must extend out for some period into the future. Cash inflows and outflows will vary nonsynchronously over time. For example, a large new loan disbursement occurring on a specific date will imply a large cash outflow that must be funded. Tracing the expected funding requirements out into the future will highlight potential cash flow mismatches, in size or timing or both.

This exercise will not, of course, be easy; it requires considerable data collection and analysis. Marrison points out that a detailed daily model of checking-account balances would probably show personal checking balances varying over the month, as individuals draw down their balances and then replenish them when wages are paid. This approach does show us where we should direct our effort: toward measuring the scheduled and expected cash flows.

In thinking about the distribution of the funding requirement, NB, it is useful to consider Figure 12.3. The actual funding requirement will be random. The expected funding requirement is shown in Panel A—the mean of the distribution. In Figure 12.3, this is above zero, but it could be above or below.

One important consideration regarding expected funding is that it may vary considerably day by day, since there may be big incoming or outgoing cash flows on particular days. For example, a new loan could be scheduled, and this would involve a large cash outflow. Treating such issues is part of standard asset-liability or cash flow management.

Unusual funding requirements: The next step is easy conceptually, simply going out into the tail of the distribution:

NUnus = (OS − IS) + μR + 2σR

Here we go out two standard deviations, which should cover roughly 98 percent of the cases—the funding should be this high or worse roughly two days out of 100. There is nothing sacred about two standard deviations, but it is a reasonable assumption for unusual funding requirements. Figure 12.3 Panel B shows what this looks like in terms of the funding distribution.
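
Under the normality assumption for R, the expected and unusual requirements are simple quantile calculations. A sketch with hypothetical parameters:

```python
from scipy.stats import norm

scheduled_net = 10.0        # OS - IS, known, $mm
mu_R, sigma_R = 2.0, 8.0    # estimated mean and volatility of R, $mm

N_exp = scheduled_net + mu_R                    # expected requirement
N_unusual = scheduled_net + mu_R + 2 * sigma_R  # roughly the 98th percentile

# Probability that the funding need exceeds some discretionary capacity
capacity = 30.0
p_exceed = 1 - norm.cdf(capacity, loc=scheduled_net + mu_R, scale=sigma_R)

print(f"Expected ${N_exp:.1f}mm, unusual ${N_unusual:.1f}mm, "
      f"P(need > ${capacity:.0f}mm) = {p_exceed:.1%}")
```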

As pointed out earlier, estimating the distribution of the net funding is not an easy task. Analyzing the cash inflows and outflows and estimating the distribution, however, provides valuable information on exactly how and why the funding requirements may vary. It also provides information on the amount and type of discretionary funding that might be necessary to support unusual funding requirements.


Crisis funding requirements and economic capital: The final step, estimating funding requirements during a liquidity crisis, is more difficult. The natural inclination would be to go further out in the tail of the normal distribution, to maybe 3 or 3.5 standard deviations. This would assume that the factors driving funding requirements during a crisis are the same as during normal times, just more severe. That is often not the case. In terms of Figure 12.3, moving further into the tail assumes the distribution is normal, whereas in reality the distribution for extreme events is probably very non-normal—probably with a skewed and fat upper tail.

[FIGURE 12.3 Distribution of Net Funding for Hypothetical Bank. Panel A: Expected Funding Requirement (mean of the distribution, relative to zero); Panel B: Unusual Funding Requirement (further out in the upper tail).]

Marrison (2002, 207 ff) provides an alternative, and very reasonable, approach to analyzing the funding requirement during a crisis. There are two steps. First, we go back to the underlying cash flows and modify them to reflect how customers and counterparties might behave during a crisis. This gives a better estimate of what cash requirements might actually be during a crisis. Second, based on this cash requirement, we work out how the bank would have to respond: what liquid and illiquid assets would have to be sold to generate this cash, and how much of a loss this would generate. This loss then provides a guess at the economic capital that would be necessary to survive such a funding liquidity crisis.

The first step is to modify the cash flows. During a crisis, it is reasonable to assume that the bank will make all scheduled payments. Most scheduled inflows will occur, but there will be some proportion of defaults. There would probably be no unscheduled inflows (customers will themselves be hoarding cash), and unscheduled outflows will be some multiple of the usual standard deviation.5 Such modifications might give a reasonable estimate of the cash required during a crisis.

The second step is to work out how the bank would generate this cash, generally by selling assets. Liquid assets can be sold first, but eventually illiquid assets will have to be sold at a discount to book or current market value, generating a loss for the bank. The key step here is to make a list of the assets that might be sold, together with an estimate of the discount at which they would sell during a forced liquidation. Such estimates may be subjective and open to error, but they at least provide some basis for estimating the potential loss.

Table 12.5 shows such a list for a hypothetical bank, together with the loss that might be suffered in a forced liquidation. Cash suffers no discount and Treasuries only a small one, while successive assets suffer increasingly steep discounts for liquidation. If the analysis in the first step showed that $15.65 billion of additional cash would be required during a crisis, the bank could expect to suffer a $350 million loss during such a crisis. This would be an estimate of the economic capital required to sustain the business through such a crisis.

TABLE 12.5 Losses Due to Asset Liquidation in a Liquidity Crisis

Assets                  Value  Cum. Value  Fire-Sale  Cum. Realized  Loss   Cum. Loss
                        ($bn)  ($bn)       Discount   Value ($bn)    ($bn)  ($bn)
Cash                    1      1           0%         1.00           0.00   0.00
Treasuries              10     11          1%         10.90          0.10   0.10
High-Grade Corp Bonds   5      16          5%         15.65          0.25   0.35
Equities                10     26          7%         24.95          0.70   1.05
Low-Grade Corp Bonds    15     41          15%        37.70          2.25   3.30
Corporate Loans         25     66          35%        53.95          8.75   12.05

5 This shows the benefit of defining and analyzing the cash flows as in equations (12.5) and (12.6). By collecting and analyzing the cash flows for normal times, we have at hand estimates of the usual flows plus estimates of the usual variation.
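
A sketch of the liquidation ladder behind Table 12.5 (values and discounts are taken from the table; the helper function is hypothetical):

```python
# (asset, value $bn, fire-sale discount), ordered from most to least liquid
assets = [
    ("Cash", 1.0, 0.00), ("Treasuries", 10.0, 0.01),
    ("High-Grade Corp Bonds", 5.0, 0.05), ("Equities", 10.0, 0.07),
    ("Low-Grade Corp Bonds", 15.0, 0.15), ("Corporate Loans", 25.0, 0.35),
]

def crisis_loss(cash_needed):
    """Sell down the ladder until realized proceeds cover the cash
    need; return the fire-sale loss (an economic-capital estimate)."""
    raised = loss = 0.0
    for name, value, discount in assets:
        if raised >= cash_needed:
            break
        # Sell just enough of this asset, net of its discount
        sell = min(value, (cash_needed - raised) / (1 - discount))
        raised += sell * (1 - discount)
        loss += sell * discount
    return loss

print(f"Loss for a $15.65bn cash need: ${crisis_loss(15.65):.2f}bn")  # 0.35
```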

Liquidity Risk Management Everything so far has focused on liquidity risk measurement, not management. The measurement is extraordinarily valuable but is only the first step. As I have emphasized throughout this book, the goal of risk management is actually managing the risk. It is to that task that we now briefly turn.

The risk measurement is important for two reasons. First, and most obviously, it provides concrete and usable information with which to manage the risk. Second, and equally important, it provides the foundation and framework for digging deeper into funding liquidity risk. The data behind a table like 12.5, together with the classification of sources and uses embedded in equations (12.5) and (12.6) and the data behind that classification, provide the details necessary to build contingency plans for managing funding liquidity before a crisis and action plans for managing during a crisis.

With these data and the framework, a bank can make better decisions. Marrison (2002, 209) lays out some of the steps a bank could take to alter its funding liquidity profile:

- Increase the proportion of long-term versus short-term funding by borrowing long-term funds in the interbank market or issuing bonds.
- Borrow additional funds long-term, investing the proceeds in liquid assets that could be sold or pledged during a crisis.
- Establish standby lines of credit that could be called upon during a crisis.
- Limit the proportion of funds lent long-term in the interbank market.
- Reduce the liquidity of liabilities, for example, by encouraging customers to deposit into fixed-term deposits rather than on-call savings accounts or demand deposits (say, by paying a higher return on fixed-term deposits).

All of these actions, however, come at a price. The yield curve is usually upward-sloping, so borrowing a larger proportion of funds long-term rather than short-term will increase costs, while lending a larger proportion short-term will decrease income. The increased cost has to be traded off against the benefits of more stable funding and the potentially lower economic capital held against a liquidity crisis.


Funding Liquidity for Other Organizations The general ideas outlined here for a commercial bank can be applied to most other organizations. For example, a long-only investment manager could follow the same classification of sources and uses of funds, but much simplified:

- Scheduled payments—would not apply, since there would be no analogues of loan disbursements or loan repayments.
- Unscheduled payments—arising from customer behavior.
  - Outflows or uses of cash = OU: redemptions by customers.
  - Inflows or sources of cash = IU: new investments from existing or new customers.
- Semidiscretionary payments—the bulk of cash flows, since most of the firm's activity is trading that can be changed without undue difficulty.
  - Outflows or uses of cash = OSD: purchases of securities.
  - Inflows or sources of cash = ISD: sales of trading securities.
- Discretionary or balancing transactions—to balance daily cash flows: borrowing or lending from a bank to balance daily redemptions.

Estimating customer inflows and outflows can be quite difficult, but from this perspective there is no conceptual difference between the framework we would apply to a bank and that applied to other organizations.

There are three issues, however, that we have not discussed but that have a substantial impact on funding liquidity. First, leverage adds a new dimension to the analysis. Second, derivatives add additional future cash flows, and these cash flows will often be contingent and consequently more difficult to estimate. Third, mark-to-market and collateral issues (usually for derivatives contracts) introduce complications and an interaction between market movements and funding liquidity. We now turn to these issues.

Leverage

Leverage is the other major source, apart from the duration transformation of traditional banking, of funding liquidity problems. Traditional banking involves taking short-duration deposits and transforming them into long-duration assets. Put simply, funding liquidity problems arise when the short-duration deposits disappear and the long-duration assets cannot be funded. With leverage, long-term assets are bought using (usually short-term) borrowed money, and funding liquidity problems arise for much the same reason: if the short-term borrowed money disappears, the assets cannot be funded.

To assess the impact of leverage on funding liquidity risk, we can undertake much the same asset-liability analysis as for a bank. We go through the exercise of classifying sources and uses of funds. Short-term borrowing used to finance longer-term assets is routinely renewed or rolled over; funding liquidity problems arise when it is not renewed. The crux of the issue is that the repayment of an expiring short-term loan is a scheduled cash outflow—the payment is an obligation. The renewal of the loan, in contrast, is not an obligation and so is an unscheduled cash inflow: to be treated as a scheduled payment, it would have to be guaranteed, in which case it would not in fact be a short-term loan.

The analysis of funding requirements under crisis conditions outlined earlier involves continuing all scheduled payments but setting unscheduled inflows to zero. In the present context, this means assuming some or all of the short-term loans do not renew. Projecting this stressed cash flow analysis into the future shows how much funding shortfall would be expected, and at what dates.
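
A minimal sketch of such a projection, assuming a hypothetical book of short-term borrowings (in the crisis scenario none are rolled over, so each maturity becomes a scheduled outflow with no offsetting unscheduled inflow):

```python
from collections import defaultdict
from datetime import date

# Hypothetical short-term borrowings: (maturity date, amount $mm)
borrowings = [
    (date(2012, 3, 1), 200.0),
    (date(2012, 3, 1), 150.0),
    (date(2012, 3, 15), 300.0),
    (date(2012, 4, 2), 250.0),
]

# Crisis scenario: no rollovers, so every repayment is an unfunded outflow
shortfall = defaultdict(float)
for maturity, amount in borrowings:
    shortfall[maturity] += amount

cumulative = 0.0
for d in sorted(shortfall):
    cumulative += shortfall[d]
    print(f"{d}: repay ${shortfall[d]:,.0f}mm, "
          f"cumulative funding gap ${cumulative:,.0f}mm")
```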

In practice, the leverage in financial businesses is often in some form other than unsecured borrowing. One particularly common form is secured funding through repo transactions.

The legal details of bond repurchase (repo) transactions can be somewhat complicated, but the end product is equivalent to a secured loan. A firm agrees to borrow cash and give the bond as security for the loan. (A repo transaction involves borrowing cash and lending the security and, confusingly for our purposes, is often referred to as a lending transaction—the security is lent.) Repo transactions commonly incorporate a haircut, through which the cash borrower cannot borrow the full value of the bond—the cash borrowing is overcollateralized. The haircut for a U.S. Treasury bond, the least risky form of repo transaction, might be 2 to 3 percent (so 98 to 97 percent of the value of the bond can be borrowed), while for corporate bonds it could range much higher, on the order of 20 percent.

The repo market is huge and is a major source of funding for the securities industry. The market for U.S. Treasury repos is deep, liquid, and active, but there are also markets for other securities, from corporate bonds to mortgage securities to equities (where the market is termed securities lending rather than repo). A large portion of the agreements are overnight repos, in other words one-day borrowing, but a repo may be agreed for term, anywhere from a few days to months.

A repo transaction is a secured loan and, as such, safer than an unsecured transaction. As a result, repo agreements are less likely to be canceled or not renewed. During a systemic liquidity crisis, lenders will often increase the haircut (which increases the security of the loan) rather than cancel the repo.

We can apply this insight to the analysis of crisis funding conditions discussed earlier. In the framework of cash inflows and outflows, we might want to treat a repo as more akin to long-term funding, with the borrowing and repayment both considered scheduled payments. During a crisis, a change in haircut would be an unscheduled cash outflow. We could estimate possible changes in haircut on a security-by-security basis: the haircut on U.S. Treasuries would change little, while the haircut on a low-grade corporate bond could change substantially. The estimated changes in haircuts, and the resulting increased cash outflows under crisis funding conditions, would give a better estimate of the possible changes in funding.

To summarize how leverage affects funding liquidity: we can still apply the framework Marrison lays out for commercial banks, taking leverage into account by determining the dates of cash inflows and outflows for short-term borrowing. For unsecured funding, the restriction or cancellation of funding would appear as a fall in unscheduled (but expected) cash inflows. For repo funding, an increase in haircuts would appear as an increase in unscheduled cash outflows. Once we have delineated the expected funding requirements (under normal and crisis conditions), we can consider whether it is necessary or worthwhile to alter the profile.

Derivatives

Derivatives introduce two complications for funding liquidity. First, future cash flows can be difficult to estimate. Second, mark-to-market and collateral can complicate funding liquidity calculations and introduce a connection between market movements and funding liquidity. Cash flows are discussed here; mark-to-market and collateral are discussed in the next section.

Derivatives produce future cash flows and so, in many respects, are no different from a bond or other security. The cash flows would fall in the scheduled payments category of the classification scheme laid out earlier. The complexity introduced by derivatives is that the cash flows are often unknown or contingent, making their estimation difficult.

Consider an interest rate swap to receive 5 percent fixed and pay floating for two years, as shown in Figure 12.4. The fixed payments will be $2.50 every six months (per $100 notional) and are known; they are represented by the upward-pointing arrows. The floating-rate payments are set equal to LIBOR and are reset every three months; beyond the first reset, the exact amounts are unknown (although they can be estimated from the forward yield curve). Such a swap presents two issues: first, the cash flows are mismatched and so will produce sequential inflows and outflows (floating outflows occur every three months versus fixed inflows every six months); and second, future floating payments are not known today.
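
A sketch of the resulting cash flow schedule (the flat 4.5 percent forward LIBOR used to estimate the floating amounts is hypothetical):

```python
notional, fixed_rate = 100.0, 0.05
fwd_libor = 0.045  # hypothetical flat forward curve for estimating floats

# Quarterly grid over two years, in months from today
for month in range(3, 25, 3):
    flows = []
    # Floating leg: paid quarterly; only the first payment is set today
    status = "known" if month == 3 else "estimated"
    flows.append(f"pay floating {notional * fwd_libor / 4:.3f} ({status})")
    # Fixed leg: received semiannually; amounts known with certainty
    if month % 6 == 0:
        flows.append(f"receive fixed {notional * fixed_rate / 2:.2f} (known)")
    print(f"month {month:2d}: " + "; ".join(flows))
```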


More exotic derivatives such as options and credit default swaps are even more difficult, as the amount and possibly the timing of the cash flows are unknown and can vary dramatically as market conditions change.

It is often said that derivatives are leveraged instruments. In a sense this is true, but the leverage is not the form discussed in the earlier section—short-term borrowing supporting the purchase of long-term assets—that is subject to the withdrawal of short-term funding and so to funding liquidity problems.

Derivatives are contracts whose payout depends on (is derived from) other market prices or events. Derivatives generally do not involve an up-front purchase or investment; by their very nature, they usually cannot. Consider the swap above, a classic agricultural futures contract such as wheat, or a financial futures contract such as an equity index. A trader can go either long or short, can either buy or sell the wheat or the equity index. Although there is a natural underlying notional amount, there is no up-front value, so no payment to be made from buyer to seller or vice versa; buying or selling does not actually involve doing either. The futures contract is simply an agreement to buy or sell in the future at a price agreed today. In the interim there may be an obligation to pay the mark-to-market value of the difference between the originally agreed price and the current market price, but this may be either positive or negative, and at initiation the expectation is that this mark-to-market will be zero.

[FIGURE 12.4 Two-Year Swap with Semiannual Fixed and Quarterly Floating Cash Flows. Fixed coupon (e.g., $5/year) received as $2.50 semiannually; floating coupon paid, initially set today, then reset every quarter.]


Mark-to-Market and Market/Cash Volatility

Derivatives contracts do embed leverage in the sense that an investor can obtain market exposure without investing the notional amount.6 The impact of this leverage on funding liquidity is quite different from that of the leverage discussed earlier. The impact for derivatives comes about through mark-to-market cash calls and collateral calls. This produces a linkage between market movements and funding liquidity that can sometimes be quite dramatic. Analysis of such market moves requires thinking about what we might call the market/cash distribution.

Exchange-traded derivatives such as futures contracts require that any profit or loss be paid daily (through what is called variation margin). This mechanism helps control counterparty credit exposure and has been integral to futures markets since their inception. Over-the-counter (OTC) derivatives such as interest rate swaps or credit default swaps have, to date, generally not involved regular mark-to-market payments. Market practice has evolved, however, so that almost all OTC derivatives involve the posting of collateral to cover mark-to-market movements. The collateral provides the means by which the party who is owed money can collect should the other side default.

Whatever the exact mechanism, whether variation margin or posted collateral, most derivatives contracts entail cash inflows or outflows in response to movements in market prices. Repo contracts, discussed in the preceding section on leverage, also generally involve regular mark-to-market or collateral calls in response to price changes, and so will respond in the same way.

The result is that, in the framework for classifying cash flows discussed earlier, the unscheduled payments, both inflows and outflows, will depend on market movements. The first inclination might be to fold the P&L volatility, estimated as discussed in Chapter 8, Section 8.3, into the volatility of the random term R in equation (12.6).7 This is not appropriate, because only some market price movements generate cash inflows and outflows. We need instead to define a new measure, what we might call the market/cash distribution. This is the distribution of cash flows generated by the distribution of market risk factors. It differs from the P&L distribution we have worked with in prior chapters because only the cash generated by market moves enters.

6 The initial margin required for a futures contract is not a payment for the contract but a sum held by the exchange to ensure and facilitate the payment of daily mark-to-market amounts. Initial margin does serve to limit the leverage an investor can obtain through the futures contract, but it is a mechanism to manage and control counterparty credit exposure rather than an investment in or payment for an asset.

7 Remember that R is the sum of unscheduled and semidiscretionary cash flows: R = (OU + OSD) − (IU + ISD), assumed to be random, for example, normal with mean μR and volatility σR.

To build the market/cash distribution, we need to build the distribution of cash flows resulting from market movements, in a manner similar to that of Chapter 8, Section 8.3. Remember that in Section 8.3 there were four steps, the first being asset/risk factor mapping, which transformed from individual assets to risk factors. This first step is all that changes in building the market/cash distribution.

In Section 8.3, the transformation from assets to risk factors involved calculating the mark-to-market P&L that resulted from changes in market risk factors. The change here is that we need to calculate the cash flow resulting from changes in market risk factors rather than the mark-to-market. This requires a somewhat different focus from standard mark-to-market: we need to go through all instruments, derivatives contracts in particular, and determine which will generate cash flows and under what conditions.

In analyzing contracts to determine the cash flows resulting from market movements, futures contracts are relatively simple: a new contract requires initial margin up front, and existing contracts generate cash flows equal to the mark-to-market profit or loss. OTC contracts are more difficult, since different contracts and counterparties usually have different terms and conditions. A contract will sometimes involve two-way margining (collateral passed from each counterparty to the other) and sometimes only one-way margining.8 There are often thresholds, so that collateral is passed only when the mark-to-market is above the threshold. The details of each contract must be collected and the effect of changes in market prices on cash flows modeled.
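
A sketch of that mapping for a single netting set (the threshold and margining terms are hypothetical):

```python
def collateral_flow(mtm, threshold, we_receive=True, we_post=True):
    """Collateral cash flow for one OTC netting set (+ = inflow to us).
    Collateral passes only for exposure beyond the threshold."""
    if mtm > threshold and we_receive:
        return mtm - threshold    # counterparty posts collateral to us
    if mtm < -threshold and we_post:
        return mtm + threshold    # we must post collateral (outflow)
    return 0.0

def variation_margin(daily_pnl):
    """Futures settle the full daily mark-to-market in cash."""
    return daily_pnl

# Example: a -$12mm market move, OTC contract with a $5mm threshold
print(collateral_flow(-12.0, 5.0))                  # -7.0: post $7mm
print(collateral_flow(-12.0, 5.0, we_post=False))   # one-way terms: 0.0
print(variation_margin(-12.0))                      # futures leg: -12.0
```

The asymmetry in the last two lines—a futures leg paying out full cash while a related uncollateralized OTC leg generates none—is exactly the mismatch discussed next and illustrated by the Metallgesellschaft example further on.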

One issue that is particularly important and deserves special mention arises when one set of contracts involves cash flows upon mark-to-market while another, similar or related, set of contracts does not. This might occur when an OTC contract that does not entail collateral calls is hedged with a futures contract that has cash flows (variation margin). The example of Metallgesellschaft, discussed further on, highlights this issue.

Once we have the market/cash distribution, we can combine it into the random cash flows R in equation (12.6) and then evaluate the funding requirements under the three sets of conditions: expected requirements, unusual requirements, and crisis funding requirements.

8 On swap contracts, broker-dealers generally require commercial customers to post collateral to the dealer, but often insist that the dealer not be required to post collateral to the customer.

Additional Remarks Regarding Funding Liquidity

The analysis of funding liquidity is difficult. Theory and practice are not as well developed as for market risk, credit risk, or even operational risk. The topic is critically important nonetheless. I will conclude with short remarks on two topics: first, an anecdote related to the trading losses of Chapter 4 that emphasizes the importance of funding liquidity; and second, the systemic nature of liquidity crises, which highlights why understanding and managing liquidity risk is particularly difficult.

Metallgesellschaft Funding liquidity problems played a central role in Metallgesellschaft's $1.3 billion loss in 1993. Most important were the variation margin, or mark-to-market, cash calls from one side of a hedge strategy that were not matched by cash or collateral calls on the other side.

Metallgesellschaft was a German industrial conglomerate, Germany's 14th-largest industrial company, with 58,000 employees. The American subsidiary, MG Refining & Marketing (MGRM), offered customers long-term contracts for oil products. MGRM undertook a strategy to hedge the long-dated fixed-price oil delivery contracts using short-dated futures and OTC swaps (essentially buying a stack of near-contract futures).

Although problematic, providing only a partial hedge, the strategy was not fatally flawed as a hedging strategy per se. It did, however, suffer from potentially severe funding liquidity problems. Oil prices moved in such a way that the value of the long-dated customer contracts moved in MGRM's favor. There were no collateral arrangements for those contracts, so MGRM made unrealized profits but generated no cash. The short-dated futures, however, lost money, and those losses had to be settled up front through cash payments to the futures exchanges.9 To make matters worse, German accounting standards did not allow the offset of unrealized profits on the customer contracts against realized losses on the futures.

When MGRM called for cash from the parent, the parent replaced senior management at MGRM and liquidated the futures contracts. There is debate about how much of the ultimately reported $1.3 billion loss was a result of the poorly designed hedge versus the untimely unwinding of the strategy. What is absolutely evident, however, is that even if the hedge had been perfect, it required such large cash payments that the strategy was probably not viable.

9 The hedge strategy was by no means perfect, so the losses on the futures were not fully offset by gains on the customer contracts. The main issue here, however, is the asymmetry of the up-front cash paid on the exchange-traded futures versus no cash or collateral transfers on the customer contracts.

The Systemic Nature of Liquidity Crises Managing liquidity risk is particularly difficult because liquidity issues are so often and so closely associated with systemic or macroeconomic credit and liquidity crises. Liquidity crises have occurred and recurred over the centuries. When reading reports of the panic and market disruption associated with such crises, the events of the 1700s, 1800s, and 1900s sound little different from the flight to quality that we see in today's liquidity crises—everybody wants cash or liquid assets:

This . . . occasioned a great run upon the bank, who were now obliged to pay out money much faster than they had received it . . . in the morning. ("South Sea Bubble," September 28, 1720; Mackay 1932, 69, originally published 1841)

Everybody begging for money—money—but money was hardly on any condition to be had. (Thomas Joplin regarding the panic of 1825, quoted in Kindleberger 1989, 127)

A crop of bank failures . . . led to widespread attempts to convert demand and time deposits into currency. . . . A contagion of fear spread among depositors. (The first U.S. banking crisis of the Great Depression, October 1930; Friedman and Schwartz 1963, 308)

The morning's New York Times [August 27, 1998] intoned "The market turmoil is being compared to the most painful financial disasters in memory." . . . Everyone wanted his money back. (After Russia's effective default in August 1998; Lowenstein 2000, 153–154)

Liquidity crises appear to be recurring episodes in our capitalist economic system. This may be the paradox of credit and banking: banking and finance are built on trust and confidence, and yet such confidence can be overstretched, and when overstretched it is apt to quickly disappear.

For an individual institution to protect against or manage such risk is difficult. When "everyone wants his money back," the fine distinctions between well-managed and poorly managed firms get blurred, and all firms suffer.10 Liquidity risk is among the most difficult of all problems for managers.

10 Nocera (2009) relates how Goldman cut back on exposure to mortgages in 2006 and 2007, anticipating problems with the mortgage markets. And yet when the liquidity crisis hit in late 2008 and early 2009, Goldman suffered along with other banks and investment banks. They protected themselves, and survived better than others, but were still caught in the turmoil.


12.4 OPERATIONAL RISK

Over the past few years there has been an explosion of research and development in operational risk measurement. To some extent, this has been driven by regulatory demands: Basel II included a charge for operational risk in calculating regulatory capital (see Basel Committee on Banking Supervision 2006 [originally published in 2004] and 2011 [originally published 2003]). The industry has also recognized the benefits of better management of operational risk—many of the trading losses discussed in Chapter 4 were directly or indirectly related to operational failures.

The mathematical sophistication of the field has grown substantially, aided by the transfer of knowledge and techniques from the actuarial models applied to nonlife insurance. We need to remember, however, that the end goal is the management of risk. This is true for all areas of risk management but is particularly true for operational risk management. The mathematical modeling is important, and there will be further strides going forward, but the modeling is only part of the overall management of operational risk.
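
To give a flavor of the actuarial machinery, here is a minimal frequency-severity simulation of an annual aggregate loss distribution (the Poisson and lognormal parameters are hypothetical, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 25               # expected loss events per year (Poisson frequency)
mu, sigma = 10.0, 1.5  # lognormal severity parameters (log-dollar scale)

# Simulate many years: each year draws a random count of events,
# then a random severity for each event, and sums them
n_years = 50_000
annual_loss = np.array([
    rng.lognormal(mu, sigma, rng.poisson(lam)).sum()
    for _ in range(n_years)
])

print(f"Mean annual loss: ${annual_loss.mean():,.0f}")
print(f"99.9% quantile:   ${np.quantile(annual_loss, 0.999):,.0f}")
```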

And there are indeed real business benefits to operational risk management. More than one author claims that "operational risk has no upside for a bank" (McNeil, Frey, and Embrechts 2005, 464) or that "operational risk can only generate losses" (Jorion 2007, 497). This is not the case. To quote Blunden and Thirlwell (2010):

Operational risk management is not just about avoiding losses or reducing their effect. It is also about finding opportunities for business benefit and continuous improvement. (p. 33)

A simple example should suffice to make the point that a focus on operations and operational risk management can have business benefits. Many hedge funds execute interest rate swaps as part of a trading strategy, and trading often starts with a small number of swaps (say, 1 to 10), traded infrequently and held to maturity. With such a small number of swaps and infrequent activity, the daily operational and settlement activities can be managed in a spreadsheet. A largely manual process can make sense from both a cost and an operational risk perspective: costs can be controlled by avoiding investment in a costly new back-office system, and risks can be controlled by suitable procedures and monitoring.

When the volume and frequency of trading increase, however, the operational tasks required in such a spreadsheet environment can be managed only by throwing people at the problem—and these people need to be skilled, careful, and responsible. In a spreadsheet environment, higher volumes mean increased operational risks. The alternative, one that reduces operational risks and can reduce costs at the same time, is to automate the process. The automation can be tailored to the scale of the operation, say, with an Access database and simple user interfaces to handle moderate volumes, or a large-scale commercial back-office system for high volumes. Such automation not only reduces error rates but also allows trading volumes to grow without adding costs, thus increasing profit potential. The result is both a better operational risk profile and lower costs, a clear business benefit.

Operational risk management is important and growing. The topic is covered in many books. McNeil, Frey, and Embrechts (2005, ch. 10) provide a nice introduction to the technical modeling and probability theory. Jorion (2007) has a good general introduction. Chernobai, Rachev, and Fabozzi (2007) is a book devoted to the mathematics and probability theory for modeling loss distributions. My favorite overall treatment, however, is Blunden and Thirlwell (2010). They focus less on the mathematical details of modeling loss events and more on the management of operational risk, and they emphasize the necessity for an overall risk management framework and plan, with the buy-in of senior management. This is right; since the goal of operational risk management is to manage the risks, it needs to be driven by senior management.

The remainder of this section provides an overview of operational risk management. This will be a high-level overview rather than a detailed account, for two reasons. First, operational risk is an area that is changing rapidly, and whatever I write is likely to be quickly outdated. Second, readers can turn to the extensive treatments of the topic, some just mentioned, that have been published recently.

The approach I lay out for operational risk differs somewhat from that applied to market and credit risk. Here I focus more on the process of identifying risks, on qualitative assessment, and on analyzing business processes, with less attention paid to quantitative measurement and probabilistic modeling.

The approach to operational risk can be summarized as falling into four stages:

1. Define: Define operational risk.
2. Identify and Assess: Identify and assess the sources and magnitude of risks associated with particular lines of business and activities. Identify risk indicators that are associated with the sources of risks.
3. Measure and Model: Use operational risk events to quantify losses and model the distribution of such losses.
4. Manage and Mitigate: Develop and implement plans to manage, control, and mitigate the risks identified and measured in the preceding stages.

The difference between operational and market or credit risk is more apparent than real, however. For measuring risk, we are still concerned with the P&L distribution—what are the possible outcomes for profit or loss?

But there are some aspects of operational risk that do set it apart. First, operational risk management is a newer discipline, and so it is natural that we need to focus relatively more effort on the first stages of defining and identifying operational risk.

Second, relative to market or credit risk, measurable data on operational risk causes and risk events are scarce, difficult to interpret, and heterogeneous. Operational risk events are internal and specific to a company. Data are generally not reported publicly, and even when they are, data for one company are often not relevant for another. (Consider incorrectly booked trades, a classic operational risk. The frequency of such errors and the severity of any resulting losses depend on a firm's particular processes, systems, and personnel.) As a result of this data paucity and heterogeneity, identifying and quantifying operational risks requires relatively more attention than for market or credit risks.

Finally, there is one fundamental difference between operational and other risks that we need to highlight. Market and credit risk are the reason for doing business; operational risk is an incidental result of doing business. Market and credit risk are a central aspect of the business. When a portfolio manager buys a bond or when a bank makes a loan, the market or credit risk is actively solicited in the expectation of making a profit that compensates for assuming the risk. There may be many problems in measuring and managing it, but the risk is front and center in deciding to undertake the business. Operational risk is different; it is an aftereffect, a result of doing business rather than the reason for the business. Nobody actively solicits the risk of wrongly booking a trade—the necessity to book trades is a result of doing the business and is not central in the way that price risk is central to the business of investing in a bond.

Operational risk is embedded in the business process rather than intrinsic to the financial product. Operational risk may be an unavoidable consequence of trading a bond, but it is not intrinsic to the bond; the operational risk depends on the details of how the business is organized. The details of the operational risk will differ from one firm to another, even for firms in the same line of business, and the details of the business must be examined to both measure and manage the risk. This requires more attention to the minutiae of the business processes, relative to market and credit risk, where risk analysis applies generally to all instruments of a particular class, independent of which firm owns the instruments.

Stage 1—Define Operational Risk

Definitions matter. By giving something a name, we give it a reality; we can speak about it with others. Until we decide what to include within the definition of operational risk, we have no hope of measuring it and little prospect of managing it effectively.

A few years ago, definitions of operational risk were rather narrow, restricted to risk arising from operations: transactions processing and settlement, back-office systems failures, and so on. These areas are, obviously, crucial, but such a definition is too restrictive. It would exclude, for example, fraud perpetrated by a trader.

Following consultation with industry, the Basel Committee on Banking Supervision (BCBS) promulgated the following definition:

Operational risk is defined as the risk of loss resulting from inadequate or failed internal processes, people, and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk. (BCBS 2006, 144)

This definition was developed for commercial banks, but it is a reasonable definition that could equally apply to virtually any organization.

This definition includes a wide variety of risks outside what we would usually consider financial risks. Losses related to people would include a trader's fraudulent trading, but also the loss of key personnel or breaches of employment law. Such events might be quite far from the market risk of our U.S. Treasury bond, but a loss is a loss, and when $5 million walks out the door it really doesn't matter whether it is due to a fall in a bond price or a legal settlement on a claim of unfair dismissal. In fact, the settlement on the employment claim may be more irksome because it is not central to managing a portfolio—the price risk of buying a bond is inherent in a financial business, but a well-managed firm should be able to avoid or minimize employment contract risks.

This is a high-level definition, but we need to move to specifics, to specific risks. A good start is the categorization of losses that the BCBS provides (2006, annex 9). Table 12.6 shows the Level 1 categories for losses, categories that provide an exhaustive list of losses that would fall under the preceding definition.

These Level 1 loss event categories are still high level, and the BCBS annex provides further granularity with Level 2 (event categories) and Level 3 (examples of activities associated with loss events). Table 12.7 shows Levels 2 and 3 for internal fraud and employment practices—the full table can be found on the Web.

There is a very important point we need to highlight here: the distinction between loss events and operational risks. Blunden and Thirlwell emphasize the difference (2010, 15), and it is actually critical for managing risk.

The items categorized by the BCBS and shown in Tables 12.6 and 12.7 are loss events—incidents associated with financial loss. We obviously care about such events, and they rightly take center place in most analyses of operational risk. But for managing the risk, for taking remedial action, it is the cause of the event that we need to focus on. The cause is really the operational risk, or at least the focus for managing operational risk.

TABLE 12.6 Basel Committee on Banking Supervision (BCBS) Loss Event Type Categorization (Level 1)

Internal fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent regulations, the law, or company policy (excluding diversity or discrimination events) that involve at least one internal party.

External fraud: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent the law, by a third party.

Employment Practices and Workplace Safety: Losses arising from acts inconsistent with employment and health or safety laws or agreements, from payment of personal injury claims, or from diversity or discrimination events.

Clients, Products, and Business Practices: Losses arising from an unintentional or negligent failure to meet a professional obligation to specific clients (including fiduciary and suitability requirements), or from the nature or design of a product.

Damage to Physical Assets: Losses arising from loss or damage to physical assets from natural disaster or other events.

Business Disruption and System Failures: Losses arising from disruption of business or system failures.

Execution, Delivery, and Process Management: Losses from failed transaction processing or process management, or from relations with trade counterparties and vendors.

Source: BCBS (2006), Annex 9.


TABLE 12.7 Basel Committee on Banking Supervision (BCBS) Detailed Categories for Two Selected Level 1 Categories

Internal fraud
  Definition: Losses due to acts of a type intended to defraud, misappropriate property, or circumvent regulations, the law, or company policy (excluding diversity or discrimination events) that involve at least one internal party.
  Categories (Level 2): Unauthorized activity; Theft and fraud.
  Activity examples (Level 3): Transactions not reported (intentional); transaction type unauthorized (with monetary loss); mismarking of position (intentional); fraud/credit fraud/worthless deposits; theft/extortion/embezzlement/robbery; misappropriation of assets; malicious destruction of assets; forgery; check kiting; smuggling; account takeover/impersonation/and so on; tax noncompliance/evasion (willful); bribes/kickbacks; insider trading (not on firm's account).

Employment Practices and Workplace Safety
  Definition: Losses arising from acts inconsistent with employment, health, or safety laws or agreements, from payment of personal injury claims, or from diversity or discrimination events.
  Categories (Level 2): Employee Relations; Safe Environment; Diversity & Discrimination.
  Activity examples (Level 3): Compensation, benefit, termination issues; organized labor activity; general liability (slip and fall, and so forth); employee health and safety rules events; workers' compensation; all discrimination types.

Source: BCBS (2006), Annex 9.


We want to think of an operational risk or operational event, such as those categorized here, as:

Cause → Event → Effect

We might best explain the difference between these by means of the following example:

Event: A trader fraudulently hides a mistake made when executing and booking an OTC option (the option strike is incorrectly booked). The mistake entails an unexpected (but moderate) loss on the option. The trader subsequently tries to trade his way out of the loss.

Effect: The original mistake plus the losses on subsequent trading amount to several times the budgeted annual profit of the trading desk.

Cause: Two causes. First, a poor user interface on the options pricing screen makes it easy to confuse entry of $16/32 and $0.16. Second, the back-office booking and reconciliation process and procedures fail to thoroughly check the deal as booked against counterparty confirms.

Focusing strictly on the loss event or the effect of the event (the monetary loss) would miss the underlying source of the event—the poor software design and the inadequate back-office process and procedures for reconciliation of trade confirms. For managing and mitigating this risk, we need to go back to ultimate causes. Concentrating on the fraudulent behavior of the trader is important but insufficient; doing so could lead to misplaced or insufficient remedial action. A rule that traders must take mandatory holidays would help protect against fraud, but in this case the real solution is to address the root causes: the poor software interface and the inadequate back-office procedures.

This distinction between observed events and underlying operational risk causes adds to the difficulty of measuring and managing operational risk. Loss events are already difficult to measure in a comprehensive manner; tracing events back to root causes adds another layer of difficulty.

Stage 2—Identify and Assess the Risks in the Business

The goal here is to identify the sources of risk and prioritize them according to the impact they are likely to have on the business. This will involve at least some subjective and qualitative evaluation of the sources as well as the impact of such risks. This information may be less precise than the objective and quantitative data to which we turn in the next section, but it is nonetheless valuable, even critical. Operational risks are embedded in the business itself, part of how the business is run. Managing a business relies on successfully using subjective and qualitative information, and so it is natural that in managing operational risk we should seek to exploit such information.

The specific risks of the business have to be identified, their impact assessed, and the data collected and catalogued. The word assess is commonly used instead of measure to reflect that the impact of operational risk is hard to quantify and will usually be estimated less precisely than for market or credit risk. The information developed here may be subjective and qualitative, but it can still be collected, catalogued, and used. Organizing such information helps to identify key risks and points the direction toward managing such risks.

There are a variety of ways to get such data, but they all rely on developing the information from within the business itself. This may take the form of interviews, questionnaires, or workshops. Whatever the form, there are a few broad considerations regarding the information being developed.

First, we want to separate the overall P&L impact of an operational risk into two components: the probability or frequency of occurrence and the size or severity. These two components, the frequency and severity, will not always be independent, but they are conceptually distinct, and so it is far easier to estimate and measure them separately. Combining these two variables gives us the overall loss, and in the next section we examine how we do this mathematically.

For the present purpose of identifying and assessing risk, the frequency and severity might be represented in a minimal and qualitative manner. For example, frequency and severity might each be estimated using a three-point score:

1. Low
2. Medium
3. High

The overall impact, the combination of frequency and severity, would then be the product, with a score from 1 to 9. This is a simplistic and subjective approach (and in this case would give only an estimate of the expected impact and not any unexpected or tail effect), but it may be sufficient to start with. The scheme might be extended to estimate average and awful cases.
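To make the arithmetic concrete, here is a minimal sketch in Python of combining the three-point frequency and severity scores into the overall 1-to-9 impact score; the score values and function name are illustrative, not from the text:

```python
# Minimal sketch of the three-point scoring scheme described above.
# The score mapping and function name are illustrative assumptions.
SCORE = {"low": 1, "medium": 2, "high": 3}

def combined_impact(frequency: str, severity: str) -> int:
    """Overall impact as the product of frequency and severity scores (1 to 9)."""
    return SCORE[frequency] * SCORE[severity]

# A frequent but moderate-severity risk scores 3 x 2 = 6.
print(combined_impact("high", "medium"))
```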

A second consideration in identifying risks is that we will often want to examine operational risks at different levels of the organization. At the top level, we will be concerned with strategic risks. These are issues that have an impact across business lines and potentially affect the business as a whole. They are related to high-level goals and functions within the organization. Examples of such strategic risks could be:

- Failure to attract and retain key staff.
- Failure to understand and adhere to the relevant law and regulations.
- Weakness in information security systems.
- IT infrastructure that is inadequate to support business objectives.


At the middle level, there will be risks associated with processes and business lines. As an example, consider the back-office or mid-office function for a hedge fund, where daily or monthly net asset value is calculated and reconciled with the fund administrator, and trades are transmitted to and reconciled with the prime broker. Examples of such process or business line risks could be:

- Loss of key staff.
- Failure to coordinate holidays among key staff (leading to lack of coverage for crucial activities).
- Lack of adequate data backup and offsite systems replication to allow disaster recovery.
- Staff turnover at the administrator, leading to a drop in reliability of producing the NAV.
- Errors at the administrator in collecting prices, leading to incorrect NAV reported to customers.

At the lowest, granular level there will be risks associated with specific business activities. Continuing with the example of the back-office and mid-office function for a hedge fund, the end-of-day reconciliation of positions versus prime broker holdings is a specific activity. Examples of activity risks for this could be:

- Failure to transmit trades to the prime broker in a timely manner.
- Failure to properly allocate trades across multiple portfolios.
- Interruption of the telecommunications link with the prime broker for automated transmission of futures tickets.
- Late delivery of futures traded at other brokers to the prime broker.
- Trader forgets to enter a ticket into the system.

There are other important aspects to consider in identifying operational risks. For example, the risk owner, the person managing the business unit or activity responsible for the risk, should be identified. Controls are usually built around operational risks. These controls are meant to eliminate or reduce the frequency or severity of risk events. Such controls should also be identified in the risk assessment because controls are a critical element in managing operational risk.

This discussion is only a brief overview of the issues. Blunden and Thirlwell (2010, ch. 4) is devoted to risk and control assessment and delves into these issues in more detail. Before continuing, however, it may help to fix ideas if we examine the output from a simple hypothetical risk assessment exercise. Table 12.8 shows the risk assessment for the risks mentioned earlier for the back-office or mid-office unit of a hedge fund. The two highest operational risks are failure to coordinate holidays among key staff, which leads to lack of coverage for critical functions, and errors at the administrator level that can lead to an incorrect NAV being delivered to customers. The owner of each risk and the controls implemented to reduce these risks are also shown.

The final issue I mention here is the development of risk indicators. These are usually called key risk indicators (KRIs) but would be better called indicators of key risks. The goal is to identify a set of measurable indicators that can tell us something about the current state of key risks and controls. For the risks shown in Table 12.8, an indicator of the risk of errors at the administrator leading to an incorrect NAV might be the time required to reconcile the fund's internal NAV against the administrator NAV. More errors by the administrator would usually mean that the internal reconciliation (undertaken by the hedge fund) would take longer. This would not be a perfect indicator (there could be other sources of longer times), but it would be an indicator that attention should be directed at that area.
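As a concrete illustration, the following Python sketch monitors such an indicator, flagging when the reconciliation time is unusually long relative to its recent history. The data, window, and threshold are entirely hypothetical assumptions, not from the text:

```python
# Hypothetical sketch: flag a key risk indicator (NAV reconciliation time)
# when the latest observation is far above its recent average.
import statistics

def kri_alert(recon_minutes, window=20, z_threshold=2.0):
    """Return True when the latest reconciliation time exceeds the
    recent mean by more than z_threshold standard deviations."""
    history = recon_minutes[:-1][-window:]
    latest = recon_minutes[-1]
    mu = statistics.mean(history)
    sd = statistics.stdev(history)
    return latest > mu + z_threshold * sd

# Ten ordinary days, then a spike: the alert directs attention at the administrator.
times = [35, 40, 38, 42, 37, 39, 41, 36, 38, 40, 95]
print(kri_alert(times))  # True
```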

In closing the discussion of risk assessment, we should note the close connection between risk assessment as discussed here and the arenas of six sigma and continuous product improvement. This is hardly surprising, since operational risk is so closely connected with the running of the business. Financial firms are not manufacturing firms, but many of the methods and ideas developed for removing defects in manufacturing processes can nonetheless be applied. We should also remember that the information developed here complements and supplements, rather than substitutes for, the quantitative information of the next section.

TABLE 12.8 Sample Risk Assessment

Risks | Owner | Freq | Sev | Comb | Controls
Failure to coordinate holidays among key staff | CT | 3 | 2 | 6 | Holiday calendar
Errors at administrator leading to incorrect NAV | RS | 3 | 2 | 6 | Weekly and monthly reconciliation versus internal NAV
Loss of key staff | TC | 1 | 3 | 3 | Semiannual performance review; training programs
Lack of adequate backup and offsite systems replication | AR | 1 | 3 | 3 | Annual strategic review of business continuity plans; monthly test of offsite systems
Turnover at administrator | TC | 1 | 2 | 2 | Semiannual review of administrator relationship

Notes: "Freq" is the estimated frequency of events and "Sev" is the estimated severity or dollar impact, both scored on a scale of 1 (low), 2 (average), and 3 (high). "Comb" is the product of frequency and severity and estimates the expected overall dollar impact (on a scale from 1 to 9).

Stage 3—Measure and Model Losses

We now turn to the quantitative measurement and modeling of risk events and losses. This is the area that has benefited from the attention of mathematicians and statisticians, and there have been considerable advances over the past few years. As for all areas of risk management, however, we have to remember that the goal is managing risk, not mathematical rigor or sophisticated models per se. Blunden and Thirlwell (2010) state it well:

Much has been written about the mathematical modelling of operational risk. Unfortunately, almost all of the writing has been very mathematical and with very little focus on the business benefits. It is almost as though the modelling of operational risk should be sufficient in itself as an intellectual exercise. (p. 146)

Modeling is important—Blunden and Thirlwell go on to make clear that they are not arguing against modeling—but modeling must be in the service of an overall framework that harnesses such modeling to benefit the business.

The goal here is to model the distribution of losses resulting from operational risks. The approach is termed the actuarial approach or the loss-distribution approach (LDA). The loss we focus on is the total loss over a period of time, say, over a year. During the period, a random number of events may occur (zero, one, . . . ) and for each event the loss may be large or small. The aggregate loss during the year results from combining the two component random variables:

Loss frequency: $N$, the number of events during the year

Loss severity: $X_k$, the loss amount for event $k$

The aggregate loss for the year is the sum of the loss amounts, summing over a random number of events:

$$S_N = \sum_{k=1}^{N} X_k \qquad (12.7)$$


The random variable $S_N$ is called a compound sum (assuming the $X_k$ all have the same distribution, and that $N$ and the $X_k$ are independent).11 For a typical operational loss distribution, there will be a handful of events in a year. When an event does occur, it will most likely be a small or moderate loss, but there will be some probability of a large loss. Figure 12.5 shows a hypothetical distribution.

[FIGURE 12.5 Hypothetical Operational Risk Loss Distribution. Panel A: Frequency—Number of Events. Panel B: Severity—Loss Given an Event. Panel C: Loss Distribution—Losses in a Year.]

11 This approach is called actuarial because much of the mathematical theory comes from the actuarial and insurance arena. See McNeil, Frey, and Embrechts (2005, section 10.2) for a discussion of the mathematical details.


Panel A shows the frequency or probability of events during a year, with an average of two in a year and virtually no chance of more than seven. Panel B shows the severity, or the probability of loss when an event occurs—high probability of a small loss, small probability of a large loss. Finally, Panel C shows the dollar losses during the year—the sum of losses over a year, or the compound sum in equation (12.7).

The important point from Figure 12.5 is that the distribution is substantially skewed, with a very long tail. (Note that, as is conventionally done when talking about operational loss distributions, the sign is changed and losses are treated as positive numbers.) The long upper tail is one of the fundamental issues, and challenges, with modeling operational losses. There is a mass of high-frequency, low-impact events—events that occur often but with low losses—and a small number of low-frequency, high-impact events. These large losses are what cause significant damage to a firm, what keep managers awake at night. The large losses are particularly important for operational risk, but because they are so infrequent, they are particularly hard to measure and model.
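To fix ideas, here is a minimal Monte Carlo sketch in Python of the compound sum in equation (12.7), with an assumed Poisson frequency (mean of two events per year, as in Figure 12.5) and an assumed lognormal severity; the distributional choices and parameter values are purely illustrative:

```python
# Minimal sketch of the loss-distribution approach: simulate the compound
# sum S_N = X_1 + ... + X_N of equation (12.7). The frequency and severity
# distributions and their parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=12345)
n_years = 100_000                 # number of simulated years

# Frequency: number of loss events in each simulated year (mean of two)
n_events = rng.poisson(lam=2.0, size=n_years)

# Severity: lognormal loss for each event; sum the losses within each year
annual_loss = np.array([rng.lognormal(mean=10.0, sigma=1.5, size=n).sum()
                        for n in n_events])

print(f"mean annual loss:     {annual_loss.mean():12,.0f}")
print(f"99th percentile loss: {np.percentile(annual_loss, 99):12,.0f}")
# The mean is modest but the 99th percentile is many times larger,
# reflecting the long upper tail discussed above.
```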

The mathematics for working with the compound sums of equation (12.7) can be complex. But that is not the major hurdle facing quantitative modeling of operational risk. Data are the major issue. To quote McNeil, Frey, and Embrechts (2005):

The data situation for operational risk is much worse than that for credit risk, and is clearly an order of magnitude worse than for market risk, where vast quantities of data are publicly available. (p. 468)

Building a model using distributions such as in Figure 12.5 is appealing but daunting. A firm would have to collect data for many years, and even then would not have very many observations, or even confidence that all events had been captured. Some public databases using pooled industry data are becoming available, but significant challenges remain.

Even with the challenges that exist, the discipline imposed by a quantitative approach can be valuable, both for challenging and enriching how we think about the problem and for forcing us to confront real data.

Before turning to managing and mitigating operational risk, we need to review the Basel II capital charges for operational risk. The capital charges are important for two reasons. First, in their own right, because commercial banks have to hold capital calculated in this manner. Second, and equally important, capital charges and the Basel II approach have spurred development of the field. The ideas behind the capital calculations provide a good starting point for data and modeling efforts.


Basel II provides for three tiered sets of calculations. The first two, called the basic-indicator (BI) and the standardized (S) approaches, use gross income as an indicator of activity: "gross income is a broad indicator that serves as a proxy for the scale of business operations and thus the likely scale of operational risk exposure within . . . business lines" (BCBS 2006, par. 653). The difference between the basic-indicator and standardized approaches is that the basic-indicator approach uses gross income for the business as a whole, while the standardized approach uses gross income by business line, as defined by the BCBS (2006, annex 8) and shown in Table 12.9.

The basic-indicator approach uses gross income over three years (positive values only) and sets capital equal to a percentage of income (15 percent). The standardized approach uses gross income in each of the business lines shown in Table 12.9, with the factors shown applied to each business line (and allowing some offset across business lines).
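A simplified Python sketch of the two calculations follows. It is illustrative only: it captures the positive-years-only averaging and the within-year offsets but ignores other details of the full rules, for which see BCBS (2006, par. 649–654 and annex 8):

```python
# Simplified sketch of the Basel II basic-indicator and standardized
# operational risk capital calculations. Illustrative only, not the full rules.
ALPHA = 0.15                      # basic-indicator factor
BETA = {"corporate_finance": 0.18, "trading_sales": 0.18,
        "retail_banking": 0.12, "commercial_banking": 0.15,
        "payment_settlement": 0.18, "agency_services": 0.15,
        "asset_management": 0.12, "retail_brokerage": 0.12}

def basic_indicator(gross_income_3yr):
    """15 percent of average firm-wide gross income, using positive years only."""
    positive = [gi for gi in gross_income_3yr if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

def standardized(income_by_line_3yr):
    """Average over three years of the beta-weighted business-line incomes,
    with offsets allowed within a year but each year floored at zero."""
    yearly = [max(sum(BETA[line] * gi for line, gi in year.items()), 0.0)
              for year in income_by_line_3yr]
    return sum(yearly) / len(yearly)

print(basic_indicator([100.0, 120.0, -10.0]))   # 15% of (100+120)/2 = 16.5
print(standardized([{"retail_banking": 60.0, "trading_sales": 40.0}] * 3))
# each year: 0.12*60 + 0.18*40 = 14.4
```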

It is when we turn to the third, most sophisticated, approach that the modeling and data come to the fore. The advanced measurement approach (AMA) allows a bank to calculate capital according to its own internal risk measurement system. To qualify for the AMA, a bank must collect loss data by the eight business lines shown in Table 12.9, and within each business line according to the loss event types shown in Table 12.6. A bank cannot use the AMA until it has collected five years of such data. There are additional criteria, as detailed in BCBS (2006).

The main point, however, is that the AMA points banks in a useful direction: toward collecting and using loss data. By providing some standardization of the categories and criteria for collecting loss event data, the BCBS has provided a major impetus for the development of operational risk modeling. Just collecting data on and monitoring losses is often a major step forward.

TABLE 12.9 Business Lines and Standardized Capital Factors—Basel II

Business Lines | Beta Factors
Corporate finance (β1) | 18%
Trading and sales (β2) | 18%
Retail banking (β3) | 12%
Commercial banking (β4) | 15%
Payment and settlement (β5) | 18%
Agency services (β6) | 15%
Asset management (β7) | 12%
Retail brokerage (β8) | 12%

Source: Basel Committee on Banking Supervision (2006, par. 654 and annex 8).


Stage 4—Manage and Mitigate the Risks

The final stage for operational risk management is to manage and mitigate the risks. The earlier stages have provided the necessary background, both qualitative and quantitative, for making informed strategic and tactical decisions.

With the sources of risks identified, and the size of actual and potential losses estimated or modeled, informed decisions can be made. Corrective and preventive actions can be undertaken. These might take the form of loss reduction (reducing the severity of losses when they occur); loss prevention (reducing the frequency of occurrences); exposure avoidance (simply avoiding the activity, an extreme form of loss prevention); or mitigation (insurance).

The link between good operational risk management and continuous process improvement and six-sigma ideas was highlighted earlier. In the end, it is competent managers and an appropriate corporate culture that provide the best protection against operational risk.

12.5 CONCLUSION

Operational and liquidity risk are the poor cousins of market and credit risk. Progress has been made, particularly in the arena of operational risk, but much more work needs to be done. Market and credit risk are more developed partly because they are easier, higher profile, and more amenable to quantitative analysis, with data readily available. Losses from liquidity and operational events are just as painful, however.

There are additional risks that a firm will face. Strategic and reputational risk is explicitly excluded from the BCBS definition of operational risk, but failures in these areas can be the most damaging to a firm in the long run. Yet it might be right to exclude them, as they fall so entirely in the realm of traditional management, with quantitative and mathematical techniques having little to offer.


CHAPTER 13
Conclusion

With this book we have taken a tour through risk management in its majesty. We have covered much, but there is also much that we have not covered. Risk, management, and financial markets are all evolving. That is good but provides challenges for any manager who takes his responsibilities seriously.

In closing, I simply reiterate what I see as the central, in fact, the only, important principle of risk management: Risk management is managing risk. This sounds simple but it is not. To properly manage risk, we need to understand and use all the tools covered in this book, and even then we will not be able to foretell the future and will have to do the best we can in an uncertain world.

Risk management is the core activity of a financial firm. It is the art of using what we learn from the past to mitigate misfortune and exploit future opportunities. It is about making the tactical and strategic decisions to control risks where we can and to exploit those opportunities that can be exploited. It is about managing people and processes, about setting incentives and implementing good governance. Risk management is about much more than numbers. "It's not the figures themselves, it's what you do with them that matters," as Lamia Gurdleneck says.1

Risk measurement and quantitative tools are critical aids for supporting risk management, but quantitative tools do not manage risk any more than an auditor's quarterly report manages the firm's profitability. In the end, quantitative tools are as good or as poor as the judgment of the person who uses them. Many criticisms of quantitative measurement techniques result from expecting too much from such tools. Quantitative tools are no substitute for judgment, wisdom, and knowledge. A poor manager with good risk reports is still a poor manager.

1 From The Undoing of Lamia Gurdleneck by K. A. C. Manderville, in Kendall and Stuart (1979, frontispiece).


Managing a firm, indeed life itself, is often subject to luck. Luck is the irreducible chanciness of life. The question is not whether to take risks—that is inevitable and part of the human condition—but rather how to appropriately manage luck and keep the odds on one's side. The philosopher Rescher has much good advice, and in closing, it is worth repeating his recommendations:

The bottom line is that while we cannot control luck through superstitious interventions, we can indeed influence luck through the less dramatic but infinitely more efficacious principles of prudence. In particular, three resources come to the fore here:

1. Risk management: Managing the direction of and the extent of exposure to risk, and adjusting our risk-taking behavior in a sensible way over the overcautious-to-heedless spectrum.

2. Damage control: Protecting ourselves against the ravages of bad luck by prudential measures, such as insurance, "hedging one's bets," and the like.

3. Opportunity capitalization: Avoiding excessive caution by positioning oneself to take advantage of opportunities so as to enlarge the prospect of converting promising possibilities into actual benefits. (2001, 187)


About the Companion Web Site

Much of this book is technical and quantitative. We have provided supplementary material on an associated web site (www.wiley.com/go/qrm) to aid in the use and understanding of the tools and techniques discussed in the text. The material falls into two broad categories.

The first is a set of routines, written in MATLAB, that implements the parametric estimation of portfolio volatility, together with basic portfolio tools such as contribution to risk and best hedges. These routines demonstrate the practical implementation of a risk measurement system. We assume that market history and portfolio sensitivities are supplied externally. The routines then calculate the portfolio volatility, volatility for various sub-portfolios, and best hedges and replicating portfolios. The objective is to provide routines that demonstrate the ideas discussed in the text. We do not aim to provide a working risk measurement system but instead to show how the ideas in the book are translated into working code.
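The companion routines themselves are in MATLAB; as a flavor of what they compute, here is a minimal Python sketch of the core parametric calculation—portfolio volatility and each position's contribution to risk. The positions and covariance matrix are made-up numbers, not from the companion site:

```python
# Minimal sketch (not the book's MATLAB code) of the parametric
# portfolio-volatility calculation. Positions and covariances are made up.
import numpy as np

w = np.array([1.0, 0.5, -0.3])            # position sensitivities to risk factors
cov = np.array([[0.04, 0.01, 0.00],       # risk-factor covariance matrix
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])

port_vol = np.sqrt(w @ cov @ w)           # portfolio volatility
contrib = w * (cov @ w) / port_vol        # marginal contributions to risk

print(port_vol)                           # total portfolio volatility
print(contrib, contrib.sum())             # contributions sum to the total
```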

The second set of materials is appendixes that expand on ideas in individual chapters in the form of interactive digital documents. For example, Figure 8.4 in the text explains VaR by means of the P&L distribution for a US Treasury bond. The digitally enhanced appendix to Chapter 8 discusses the volatility but makes the discussion interactive. Using Wolfram's Computable Document Format, the user can choose the VaR probability level, the instrument (bond, equity futures, etc.), the notional amount, and the assumed distribution (normal, Student-t, mixture of normals). The document dynamically computes the VaR and draws the P&L distribution, allowing the user to see how the VaR varies as assumptions or various aspects of the portfolio change.


References

Abramowitz, Milton, and Irene A. Stegun. 1972. Handbook of Mathematical Functions. New York: Dover Publications.
Aczel, Amir D. 2004. Chance: A Guide to Gambling, Love, the Stock Market, & Just About Everything Else. New York: Thunder's Mouth Press.
Adler, David. 2009. Snap Judgment. Upper Saddle River, NJ: FT Press.
Alexander, Carol. 2001. Market Models: A Guide to Financial Data Analysis. New York: John Wiley & Sons.
Artzner, P., F. Delbaen, J. M. Eber, and D. Heath. 1999. "Coherent Measures of Risk." Mathematical Finance 9: 203–228.
Bailey, Jeffrey V., William F. Sharpe, and Gordon J. Alexander. 2000. Fundamentals of Investments. 3rd ed. New York: Prentice Hall.
Basel Committee on Banking Supervision. Undated. About the Basel Committee. www.bis.org/bcbs.
______. 2003. Sound Practices for the Management and Supervision of Operational Risk. BIS. www.bis.org/publ/bcbs96.htm.
______. 2004. Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework. BIS. www.bis.org/publ/bcbs107.htm.
______. 2006. Basel II: International Convergence of Capital Measurement and Capital Standards: A Revised Framework—Comprehensive Version. BIS. www.bis.org/publ/bcbs128.htm.
______. 2011. Principles for the Sound Management and Supervision of Operational Risk. BIS, June. www.bis.org/publ/bcbs195.htm.
Beirlant, Jan, Wim Schoutens, and Johan Segers. 2005. "Mandelbrot's Extremism." Wilmott Magazine, March.
Bernstein, Peter L. 2007. Capital Ideas Evolving. Hoboken, NJ: John Wiley & Sons.
Billingsley, Patrick. 1979. Probability and Measure. New York: John Wiley & Sons.
Bingham, N. H., and R. Kiesel. 1998. Risk-Neutral Valuations. New York: Springer.
Blunden, Tony, and John Thirlwell. 2010. Mastering Operational Risk. Harlow, UK: Pearson Education Ltd.
Box, G. E. P., and G. M. Jenkins. 1970. Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day.
Brand, L., and R. Bahr. 2001. Ratings Performance 2000: Default, Transition, Recovery, and Spreads. Standard & Poor's.
Carty, L. V., and D. Lieberman. 1996. Defaulted Bank Loan Recoveries. Special Report. Global Credit Research. Moody's Investors Service.
Chernobai, Anna S., Svetlozar T. Rachev, and Frank J. Fabozzi. 2007. Operational Risk. Hoboken, NJ: John Wiley & Sons.
Chernozhukov, Victor, Ivan Fernandez-Val, and Alfred Galichon. 2007. Rearranging Edgeworth-Cornish-Fisher Expansions, September. www.mit.edu/~vchern/papers/EdgeworthRearranged-posted.pdf.
Coleman, Thomas S. 1998a. Fitting Forward Rates to Market Data. January 27. http://ssrn.com/abstract=994870.
______. 1998b. A Practical Guide to Bonds and Swaps. February 20. http://ssrn.com/abstract=1554029.
______. 2007. Estimating the Correlation of Non-Contemporaneous Time-Series. December 13. http://ssrn.com/abstract=987119.
______. 2009. A Primer on Credit Default Swaps (CDS). December 29. http://ssrn.com/abstract=1555118.
______. 2011a. A Guide to Duration, DV01, and Yield Curve Risk Transformations. January 15. http://ssrn.com/abstract=1733227.
______. 2011b. Probability, Expected Utility, and the Ellsberg Paradox. February 26. http://ssrn.com/abstract=1770629.
Coleman, Thomas S., and Larry B. Siegel. 1999. "Compensating Fund Managers for Risk-Adjusted Performance." Journal of Alternative Investments 2(3): 9–15.
Cramér, Harald. 1974. Mathematical Methods of Statistics. Princeton, NJ: Princeton University Press. First published 1946.
Credit Suisse Financial Products. 1997. CreditRisk+—A Credit Risk Management Framework. Credit Suisse Financial Products.
Crosbie, Peter, and Jeff Bohn. 2003. Modeling Default Risk. Moody's KMV, December 18. www.moodyskmv.com.
Crouhy, Michel, Dan Galai, and Robert Mark. 2001. Risk Management. New York: McGraw-Hill.
______. 2006. Essentials of Risk Management. New York: McGraw-Hill.
Drezner, Z. 1978. "Computation of the Bivariate Normal Integral." Mathematics of Computation 32 (January): 277–79.
Duffie, Darrel. 2001. Dynamic Asset Pricing Theory. 3rd ed. Princeton, NJ: Princeton University Press.
Duffie, Darrel, and Kenneth J. Singleton. 2003. Credit Risk: Pricing, Measurement, and Management. Princeton Series in Finance. Princeton, NJ: Princeton University Press.
Eatwell, John, Murray Milgate, and Peter Newman, eds. 1987. The New Palgrave: A Dictionary of Economics. London: Macmillan Press Ltd.
Ellsberg, Daniel. 1961. "Risk, Ambiguity, and the Savage Axioms." The Quarterly Journal of Economics 75 (4, November): 543–669.
Embrechts, Paul, Claudia Klüppelberg, and Thomas Mikosch. 2003. Modelling Extremal Events for Insurance and Finance. Corrected 4th printing. Berlin: Springer Verlag.
Epstein, Larry G. 1999. "A Definition of Uncertainty Aversion." Review of Economic Studies 66 (3, July): 579–608.
Feller, William. 1968. An Introduction to Probability Theory and Its Applications, Volume I. 3rd ed., revised printing. New York: John Wiley & Sons.
Felsted, Andrea, and Francesco Guerrera. 2008. "Inadequate Cover." Financial Times, October 7.
Felsted, Andrea, Francesco Guerrera, and Joanna Chung. 2008. "AIG's Complexity Blamed for Fall." Financial Times, October 7.
Friedman, Milton, and Anna Jacobson Schwartz. 1963. A Monetary History of the United States, 1857–1960. Princeton, NJ: Princeton University Press.
Frydl, Edward J. 1999. The Length and Cost of Banking Crises. International Monetary Fund Working Paper. Washington, DC: International Monetary Fund, March.
Gardner, Martin. 1959. "Mathematical Games." Scientific American, October.
Garman, M. B. 1996. "Improving on VaR." Risk 9(5): 61–63.
Gigerenzer, Gerd. 2002. Calculated Risks: Learning How to Know When Numbers Deceive You. New York: Simon & Schuster.
______. 2007. Gut Feelings: The Intelligence of the Unconscious. New York: Penguin Group.
Gladwell, Malcolm. 2005. Blink. New York: Little, Brown and Company.
______. 2009. "Cocksure: Banks, Battles, and the Psychology of Overconfidence." The New Yorker, July 27.
Gordy, M. B. 2000. "A Comparative Anatomy of Credit Risk Models." Journal of Banking and Finance 24: 119–149.
Hacking, I. 1990. The Taming of Chance. Cambridge, UK: Cambridge University Press.
Hacking, Ian. 2001. Probability and Inductive Logic. New York: Cambridge University Press.
Hacking, I. 2006. The Emergence of Probability. 2nd ed. Cambridge, UK: Cambridge University Press.
Hadar, J., and W. Russell. 1969. "Rules for Ordering Uncertain Prospects." American Economic Review 59: 25–34.
Hald, A. 1952. Statistical Theory with Engineering Applications. New York: John Wiley & Sons.
Hanoch, G., and H. Levy. 1969. "The Efficiency Analysis of Choices Involving Risk." Review of Economic Studies 36: 335–346.
Hoffman, Paul. 1998. The Man Who Loved Only Numbers: The Story of Paul Erdos and the Search for Mathematical Truth. New York: Hyperion.
Holm, Erik, and Margaret Popper. 2009. "AIG's Liddy Says Greenberg Responsible for Losses." Bloomberg website, March 2.
Hull, John C. 1993. Options, Futures, and Other Derivative Securities. 2nd ed. Englewood Cliffs, NJ: Prentice-Hall.
Isserlis, L. 1918. "On a Formula for the Product-Moment Coefficient of Any Order of a Normal Frequency Distribution in Any Number of Variables." Biometrika 12: 134–139.
Jorion, Philippe. 2007. Value-at-Risk: The New Benchmark for Managing Financial Risk. 3rd ed. New York: McGraw-Hill.
______. 2000. "Risk Management Lessons from Long-Term Capital Management." European Financial Management 6(3): 277–300.
Kahneman, Daniel, and Amos Tversky. 1973. "On the Psychology of Prediction." Psychological Review 80: 237–251.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge, UK: Cambridge University Press.
Kaplan, Michael, and Ellen Kaplan. 2006. Chances Are . . . Adventures in Probability. New York: Viking Penguin.
Kendall, Maurice, and Alan Stuart. 1979. Advanced Theory of Statistics. 4th ed. Vol. 2. New York: Macmillan.
Keynes, John Maynard. 1921. A Treatise on Probability. London: Macmillan.
Kindleberger, Charles P. 1989. Manias, Panics, and Crashes: A History of Financial Crises. Revised edition. New York: Basic Books.
Kmenta, Jan. 1971. Elements of Econometrics. New York: Macmillan.
Knight, Frank. 1921. Risk, Uncertainty and Profit. Boston: Houghton Mifflin Co.
Laeven, Luc, and Fabian Valencia. 2008. "Systemic Banking Crises: A New Database." IMF Working Paper.
Lakatos, Imre. 1976. Proofs and Refutations: The Logic of Mathematical Discovery. Cambridge, UK: Cambridge University Press.
Langer, Ellen. 1975. "The Illusion of Control." Journal of Personality and Social Psychology 32(2): 311–328.
Langer, Ellen, and Jane Roth. 1975. "Heads I Win, Tails It's Chance: The Illusion of Control as a Function of Outcomes in a Purely Chance Task." Journal of Personality and Social Psychology 32(6): 951–955.
LeRoy, Stephen F., and Larry D. Singell Jr. 1987. "Knight on Risk and Uncertainty." Journal of Political Economy 95 (2, April): 394. doi:10.1086/261461.
Litterman, R. 1996. "Hot Spots and Hedges." Journal of Portfolio Management (Special Issue) (December): 52–75.
Lleo, Sébastien. 2008. Risk Management: A Review. London: CFA Institute Publications.
Lowenstein, Roger. 2000. When Genius Failed: The Rise and Fall of Long-Term Capital Management. New York: Random House.
Mackay, Charles. 1932. Extraordinary Popular Delusions and the Madness of Crowds. New York: Farrar Straus Giroux.
Mahajan, Sanjoy, Sterl Phinney, and Peter Goldreich. 2006. Order-of-Magnitude Physics: Understanding the World with Dimensional Analysis, Educated Guesswork, and White Lies. March 20. www.stanford.edu/class/ee204/SanjoyMahajanIntro-01-1.pdf.
Markowitz, Harry M. 1959. Portfolio Selection. Malden, MA: Blackwell Publishers.
______. 2006. "de Finetti Scoops Markowitz." Journal of Investment Management 4 (3, Third Quarter). Online only, and password protected, at www.joim.com.
Marrison, Chris. 2002. Fundamentals of Risk Measurement. New York: McGraw-Hill.
Maslin, Janet. 2006. "His Heart Belongs to (Adorable) iPod." New York Times, October 19.
Mauboussin, Michael, and Kristin Bartholdson. 2003. "On Streaks: Perception, Probability, and Skill." Consilient Observer (Credit Suisse-First Boston), April 22.
McCullagh, P., and J. A. Nelder. 1989. Generalized Linear Models. 2nd ed. London: Chapman & Hall.
McNeil, Alexander, Rudiger Frey, and Paul Embrechts. 2005. Quantitative Risk Management. Princeton, NJ: Princeton University Press.
Merton, Robert C. 1974. "On the Pricing of Corporate Debt: The Risk Structure of Interest Rates." Journal of Finance 29 (2, May): 449–470.
Mirrlees, J. 1974. "Notes on Welfare Economics, Information, and Uncertainty." In Contributions to Economic Analysis, ed. M. S. Balch, Daniel L. McFadden, and S. Y. Wu. Amsterdam: North Holland.
______. 1976. "The Optimal Structure of Incentives and Authority within an Organization." Bell Journal of Economics 7(1): 105–131.
Mlodinow, Leonard. 2008. The Drunkard's Walk: How Randomness Rules Our Lives. New York: Pantheon Books.
New School. Riskiness. http://homepage.newschool.edu/het//essays/uncert/increase.htm.
Nocera, Joe. 2009. "Risk Mismanagement." New York Times, January 4, Magazine sec. www.nytimes.com/2009/01/04/magazine/04risk-t.html?_r=1&ref=business.
Press, William H., Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 2007. Numerical Recipes. 3rd ed. New York: Cambridge University Press.
Reinhart, Carmen M., and Kenneth S. Rogoff. 2009. This Time Is Different: Eight Centuries of Financial Folly. Princeton, NJ: Princeton University Press.
Rescher, Nicholas. 2001. Luck: The Brilliant Randomness of Everyday Life. New York: Farrar Straus Giroux.
RiskMetrics Group, Greg M. Gupton, and Christopher C. Finger. 1997. CreditMetrics—Technical Document. RiskMetrics Group. www.riskmetrics.com/publications/techdocs/cmtdovv.html.
Rosenhouse, Jason. 2009. The Monty Hall Problem: The Remarkable Story of Math's Most Contentious Brainteaser. New York: Oxford University Press.
Ross, Stephen. 1973. "The Economic Theory of Agency: The Principal's Problem." American Economic Review 63 (2, May): 134–139.
Rothschild, M., and J. E. Stiglitz. 1970. "Increasing Risk I: A Definition." Journal of Economic Theory 2(3): 225–243.
______. 1971. "Increasing Risk II: Its Economic Consequences." Journal of Economic Theory 3(1): 66–84.
Schmeidler, David. 1989. "Subjective Probability and Expected Utility Without Additivity." Econometrica 57 (3, May): 571–587.
Selvin, S. 1975a. "On the Monty Hall Problem." American Statistician 29: 134.
______. 1975b. "A Problem in Probability." American Statistician 29: 67.
Shaw, W. T., and K. T. A. Lee. 2007. Copula Methods vs. Canonical Multivariate Distributions: The Multivariate Student T Distribution with General Degrees of Freedom. Kings College, London, April 24.
Stiglitz, J. E. 1974. "Incentives and Risk Sharing in Sharecropping." Review of Economic Studies 41 (April): 219–255.
______. 1975. "Incentives, Risk, and Information: Notes Toward a Theory of Hierarchy." Bell Journal of Economics 6(2): 552–579.
Taleb, Nassim. 2004. Fooled by Randomness. New York: Random House.
______. 2007. The Black Swan: The Impact of the Highly Improbable. New York: Random House.
The Economist. 2008. "AIG's Rescue: Size Matters." The Economist, September 18. www.economist.com/finance/displaystory.cfm?story_id=12274070.
Tremper, Bruce. 2008. Staying Alive in Avalanche Terrain. 2nd ed. Seattle, WA: The Mountaineers Books.
Tversky, Amos, and Daniel Kahneman. 1974. "Judgment under Uncertainty: Heuristics and Biases." Science 185(4157): 1124–1131.
______. 1983. "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review 90 (4, October): 293–315.
Valencia, Mathew. 2010. "The Gods Strike Back." Economist, February 11.
Varian, Hal R. 1978. Microeconomic Analysis. W. W. Norton & Company.
vos Savant, Marilyn. 1990a. "Ask Marilyn." Parade, September 9.
______. 1990b. "Ask Marilyn." Parade, December 2.
______. 1996. The Power of Logical Thinking. New York: St. Martin's Press.
Wechsberg, Joseph. 1967. The Merchant Bankers. London: Weidenfeld and Nicolson.
WilmerHale. 2008a. Rogue Traders: Lies, Losses, and Lessons Learned. WilmerHale, March. www.wilmerhale.com/files/Publication/738ab57a-ba44-4abe-9c3e-24ec62064e8d/Presentation/PublicationAttachment/a5a7fbb0-e16e-4271-9d75-2a68f7db0a3a/Rogue%20Trader%20Article%20FINAL%20for%20Alert.pdf.
Young, Brendon, and Rodney Coleman. 2009. Operational Risk Assessment. Chichester, UK: John Wiley & Sons.


About the Author

THOMAS S. COLEMAN has worked in the finance industry for more than 20 years and has considerable experience in trading, risk management, and quantitative modeling. Mr. Coleman currently manages a risk advisory consulting firm. His previous positions have been head of quantitative analysis and risk control at Moore Capital Management, LLC (a large multi-asset hedge fund manager), and a director and founding member of Aequilibrium Investments Ltd., a London-based hedge fund manager. Mr. Coleman worked on the sell side for a number of years, with roles in fixed-income derivatives research and trading at TMG Financial Products, Lehman Brothers, and S. G. Warburg in London.

Before entering the financial industry, Mr. Coleman was an academic, teaching graduate and undergraduate economics and finance at the State University of New York at Stony Brook, and more recently he has taught as an adjunct faculty member at Fordham University Graduate School of Business Administration and Rensselaer Polytechnic Institute. Mr. Coleman earned his PhD in economics from the University of Chicago and his BA in physics from Harvard. He is the author, together with Roger Ibbotson and Larry Fisher, of Historical U.S. Treasury Yield Curves and continues to publish in various journals.


Index

Activity-related operational risks, 521
Actuarial approach
  operational loss measurement, 523–526
  and risk measurement, 382–383
  risk-neutral approach vs., 463–464
  and risk pricing, 459–460
Aczel, Amir D., 25, 42, 46, 53
Advanced measurement approach (AMA), capital charges, 526
Aggregating risk, summary measures, 285–290
AIB/Allfirst Financial trading loss (2002), 103–104, 108–109, 118, 123
AIG Financial Products (FP) trading loss (2008), 13, 82–84
All-or-nothing contribution to risk, 162–163, 317–318, 327
Amaranth Advisors trading loss (2006), 104, 107, 110–111, 113, 127, 130–131
Ambiguity aversion. See uncertainty/randomness
American Alpine Club, 102
Aracruz Celulose trading loss (2008), 104, 110–111, 115, 128, 130
ARCH (autoregressive conditionally heteroscedastic) model, 251
Ars Conjectandi (Bernoulli), 42
Askin Capital Management trading loss (1994), 104, 110–111, 117–118, 127
Asset liquidity risk
  costs/benefits of liquidation, 484–487
  defined, 182–183, 481
  evaluating, 483, 492–496
Asset-to-risk factor mapping
  conceptual models, 210–211
  FX example, 214
  for single bond position, 271
Asymmetric (skewed) distribution
  and credit risk modeling, 182, 394–403
  credit vs. market risk, 380–382
  and volatility, 190–191
Asymmetric information
  historical vs. future data, 36–37
  principal-agent issues, 69
Autoregressive conditionally heteroscedastic (ARCH) model, 251
Avalanche response, as risk management model, 93–96, 101–102
Back-office procedures
  effective risk management, 513–514, 516, 519, 521–522
  as source of operational risk, 112, 116, 118, 125–126, 129, 131–132

Bagehot, Walter, 496–497
Bank for International Settlements (BIS), 91
Bankhaus Herstatt trading loss (1974), 104, 106–107, 110–111, 117, 127–129, 181
Bank of Montreal trading loss (2007), 105, 120, 123
Banks
  commercial, regulations governing, 90–91
  corporate structure and risk management, 85
  defining assets, capital holdings, 91–92
  and funding liquidity risk, 496–497
  measuring and managing liquidity, 498–504
  operational risk, 513, 516–519
Barings Bank failure (1995), 12–13, 72, 92, 103–104, 108–109, 112, 115, 123, 131
Barings Brothers failure (1890), 465
Basel Committee on Banking Supervision. See BCBS
Basel II/Basel III rules, 92, 525–526
Basic-indicator (BI) approach, capital charge calculation, 526
Basis point value (BPV), 8
BAWAG trading loss (2000), 104, 108–109, 116, 124–125, 128
Bayes, Thomas, 52
Bayes' rule/Theorem, 48–51, 53–58
BCBS (Basel Committee on Banking Supervision), 90–91, 183–184, 226–229, 513, 516–517
Bear Stearns (2008) failure, 63–64
Beliefs/belief inertia, 98
Belief-type probability
  Bayes' rule, 48–51
  de Finetti game, 45–46
  with frequency-type probability, 52–53, 58
  logical probability, 46–47
Berger, Michael, 120
Berkeley, George, 15
Bernoulli, Jakob, 41–42
Bernoulli mixture models
  applications, 401–403, 443–446
  parameters, 451–454
  Poisson mixture comparison, 449
  using Poisson variable in, 430–431
Bernoulli Probit-Normal Mixture, 457
Bernoulli's Theorem, 42
Best hedge position calculations, 164–167, 327–335, 354–355, 364
Best practices, 85–86, 131
Beta-equivalent notational, 8
Binning for fixed-income instruments, 215–216
Binomial distribution
  as analytic approach, 456
  Bernoulli trials, 41–42, 478
  for defaults, 386–387, 394
  negative, 433, 436, 448
BIS (Bank for International Settlements), 91
Blink (Gladwell), 38
Blunden, Tony, 513–514, 517, 521, 523
Board of directors, role in risk management, 3, 67, 85–86, 89
Bonds
  asset/risk factor mapping, 210–212, 271
  comparing multiple assets, 246
  corporate, default example, 196–199

  and credit risk modeling, 378, 404, 409, 417, 463, 465, 472, 475
  DV01/BPV, 8, 154–156, 271
  floating-rate, and CDS behavior, 80
  liquidation costs, 503–504, 506
  portfolio analysis involving, 333, 336–337, 351, 355, 360
  and rate swaps, 74–83
  risks associated with, 95, 179, 181–182
  risky, pricing model for, 470–472
  and share value, 71
  tail events associated with, 104–105, 114, 116, 119, 121–122, 128
  volatility contributions/comparisons, 159, 166–167
BPV (basis point value), 8
Brazil currency markets, and trading loss events, 130
Breast cancer risk calculations, 49–51
CAC index futures
  best hedge positions/replicating portfolios, 164–166
  beta-equivalent position, 8
  estimating volatility of, 156–157
  liquidity of, 488
  marginal contribution calculations, 161–162
  and normal distribution, 145
  in parametric estimates, 216–217
  portfolio analysis example, 269, 313–316, 324–334, 348
  volatility contributions/comparisons, 166–167, 285, 294–296, 303
Calyon trading loss (2007), 105, 110–111, 121, 127–128, 129
Capital asset pricing model, 214
Capital charges (Basel II), calculating, 525–526
Capital holdings, 91–92
Cash flow. See also Funding liquidity risk
  cash-flow mapping, 215–216
  credit default swaps, 80–82, 214, 466–468
  future, valuing, 461
  and interest rate swaps, 74–75
  and liquidity risk, 182
  and market/cash distribution, 510–511
  risky bonds, 214, 470–472
Cash flow mapping, 215
Cayne, Jimmy, 63–64
CDSs (credit default swaps)
  and AIG Financial Products failure, 82–84
  applying market pricing to, 471–472
  behavior of and risk calculations, 79–84
  equivalence to floating rate note, 466–467
  pricing model, 467–470
Central limit theorem, 44, 327
Central tendency, 188
CEO (chief executive officer), risk management responsibilities, 3, 10, 67, 73, 86–87, 89
China Aviation Oil (Singapore) trading loss (2004), 105, 108–109, 119, 124, 128, 130
CITIC Pacific trading loss (2008), 104, 116, 124, 128
Citron, Robert, 114
Closing-time problem, 218–219
Codelco trading loss (1993), 105, 107–109, 121, 124
Cognitive biases, 22

Coleman, Thomas S., 71
Collateral calls, 182, 509–511
Commodity price risk, 179–180
Common factor structure, 400
Communications. See Risk communication/reporting
Compensation and incentives, 68–71, 478, 518
Constructivist (actuarial) approach, 382–383
Contribution to risk tools, 161–163
Convolution, 201
Copulas, 241–243, 299–304
Corporate structure, 84–87, 125–127. See also Board of directors; CEO (chief executive officer)
Correlations
  assets within a portfolio, 326–327
  correlation matrix estimates, 251
  credit risk modeling, 394–403
  daily, and portfolio risk estimates, 218–219
  and diversification, 404–407
  and joint default probability, 388–389
  over time, 246–248
  and risk reduction potential, 314–317
Counterparty risk, 181, 379
Covariance, 217–219, 242, 249, 266, 331. See also Variance-covariance distribution
Cramer, Harald, 188–189
Credit analysis, 473–477
Credit default correlations, 405–406
Credit default swaps. See CDS (credit default swaps)
Credit migration, 478
Credit risk
  data inputs, 379, 383, 390–391
  defined, 180–181, 377
  legal issues, 383
  limits, implementing, 89
  market risk vs., 379–383
  operational risk vs., 515
  and P&L distribution estimates, 377–378
  and risk-weighted assets, 92
  varieties of, 181–182, 378–379
CreditRisk+ model
  assumptions, 434–435
  conditional independence across firms, 431–434
  CreditMetrics model comparison, 458
  credit risk pricing vs., 409–410
  intensity volatility and default correlation, 437–441
  loss distribution, 441–443
  overview, 429–430
  parameters, 454
  Poisson process, Poisson mixture, and negative binomial default distribution, 430–432, 435–436
  specific factor, 441–442
  static vs. dynamic models, 410–411
Credit risk modeling
  Bernoulli vs. Poisson mixture models, 451–456
  equivalent Martingale/risk-neutral pricing, 461–463
  reduced form approach, 429–443
  risk pricing approach, 459–460, 463–464
  static/structural approach, 409, 411–429, 443–448, 450
  stylized approaches, 383–386, 388–409
  taxonomy, overview, 410–412
  technical challenges, 390

Credit structures, types of, 464–477
Credit Suisse Financial Products, 429, 432, 437–438, 441
Crisis funding requirements, 502–503
CRO (chief risk officer), 86
Cross-currency settlement risk, 107
Crouhy, Michel, 85–88, 90, 179–180, 204, 261, 327
Daily volatility, 9
Daiwa Bank trading loss (1995), 104, 108–109, 116, 123–124, 131
Damage control, 6, 93–95, 99, 530
Data
  for asset liquidity risk estimates, 492
  for bank funding liquidity risk estimates, 499–500
  for credit risk estimates, 379, 383, 390–391, 416, 454
  historical vs. future, 36–37, 205–206
  internal vs. external, 176–177
  and IT infrastructure, 72–73, 176–177
  for operational risk estimates, 515
Default probability, 414–421, 430, 432
De Finetti, Bruno/de Finetti game, 45–46, 48
Delta normal distribution. See Parametric approach
Dependence
  across defaults, 425, 428
  across firms, 419–421, 428–431, 436, 445
  copulas, 241–243, 300, 303
  credit risk distributions, 278, 281, 386, 388–391
  credit risk modeling, 394–403
  multivariate analyses, 296
  tail dependence, 245, 248, 305–306
Derivatives, second
  and funding liquidity risk, 505–508
  parametric estimation using, 307–310
Desk-level traders, view of risk, 7–8
Dexia Bank trading loss (2001), 105, 110–111, 121
Dimensionality, 251
Disasters, financial. See Financial risk events; Tail (extreme) events
Dispersion. See Scale
Dispersion/density functions, 16–20, 189–191
Diversification, 196, 403–407
Dollar duration, 8
Dow Jones Industrial Average, 227–229, 235–237
Dynamic reduced form risk pricing, 461–464
Econometrics, 251
Economic capital
  and credit risk modeling, 377–378, 393, 459–460, 477
  crisis funding, 502–504
Elliptical distributions, 201
Ellsberg, Daniel/Ellsberg paradox, 58–62
Embedded options, 70–71
Embrechts, Paul, 183, 196, 226, 233, 237–239, 241–243, 300, 365–366, 368, 403, 405–406, 525
Employer-employee relations, 68–69
Equity price risk, 179
Equity traders, 8

Equivalent Martingale/risk-neutral approach to risk pricing, 461–463
Erdos, Paul, 35
ES (expected shortfall), 199–200
Exponential weighting, 250–251
Exposure, measuring, 8, 388–389
Extreme events. See Tail (extreme) events
Extreme value theory (EVT), 237–241, 245, 296–299
Factors, factor loadings, principal component analysis, 342–344, 346, 400
Failure to segregate, as cause of trading loss event, 131
Familiarity and effectiveness, 97–98
Fannie Mae/Freddie Mac, 133
Fat tails, 246–248
Feller, William, 15, 29–30
Finance unit (risk management group), 89
Financial time series, 245–248
5%/95% VaR, 198
Fixed-income traders, 7–8
Foreign exchange. See FX (foreign exchange) speculation
Franklin, Benjamin, 46, 206
Fraudulent trading
  and financial loss events, 107–112
  fraud without, 124
  and operational risk management, 516–519
  preventing, 125–127
  tangential fraud, 128
  types of fraud, 123–125
Frechet-class distribution, 238–240
Frequency-type probability, 43–45, 47, 52–53, 58
Frey, Rudiger, 183, 196, 226, 233, 237–239, 241–243, 300, 365–366, 368, 403, 405–406, 525
FRN (floating-rate notes), 80–82, 466–467
Front-back office separation, and trading loss events, 125, 129, 131
Funding liquidity risk
  defined, 182–183, 481–483
  and derivatives, 505–508
  leveraged instruments, 505–507
  market-to-market payments and market/cash volatility, 509
  Metallgesellschaft trading loss (1993), 511–512
  risk management using, 496, 498–504
FX (foreign exchange) speculation
  as cause of trading losses, 107, 128–130, 179
  forward contracts, risks associated with, 207–208
  risk estimates, valuation model, 211–212
Galai, Dan, 85–88, 90, 179–180, 204, 261, 327
Gamma random variable, 479
GARCH (generalized autoregressive conditionally heteroscedastic) model, 251
Gardner, Martin, 32
Garman, M. B., 160, 312, 318
General Electric, 106
Generalized linear mixed credit risk models, 448
Generalized pareto distribution. See GPD
GEV (generalized extreme value) distribution, 237–240, 296–299

Gigerenzer, Gerd, 21, 24, 39, 49, 51, 58
Gladwell, Malcolm, 38, 63–64
Global financial crisis, 92
Goldman Sachs, 103, 151, 354, 512
Gordy, M. B., 458
GPD (generalized Pareto distribution), 237, 240–241, 296–300
Groupe Caisse d'Epargne trading loss (2008), 104, 110–111, 118, 127–129
Gumbel-class distribution, 238–240
Hacking, Ian, 48, 52–53
Haldane, Andrew, 101
Hedge funds
  loss events, 76, 103, 113–114, 116, 118, 120, 125, 127–128
  operational risk, 513, 521–522
  performance fees, 71
Herstatt. See Bankhaus Herstatt
Heuristics (rules of thumb), 22, 151–152
High-water mark, 71
Historical approach
  asset to risk factor mapping, 271–272
  modeling, 221–223
  P&L distribution, 274–276
  parametric and Monte Carlo approaches vs., 224–225
  summary, 217–218
  volatility and VaR, 223–224, 278–281
Hot Spots and Hedges (Litterman), 160, 312, 318
Human factor, 96–97, 99
Hyperinflation, 132
Hypo Group Alpe Adria trading loss (2004), 105, 110–111, 120, 124
Idiosyncratic risk
  systemic risk vs., 12–13, 102
  trading loss events, 1974–2008, 103–122
Iguchi, Toshihide, 124
Incentive schemes, 70
Incremental VaR. See All-or-nothing contribution to risk
Infinitesimal contribution to risk. See Marginal contribution to risk
Inflation, 106–107, 132
Innumeracy, statistical, overcoming, 39
Interest rate risk, 179
Intuition, human
  and probability, 22–26, 29–30, 37–38
  role in risk management, 68
IRSs (interest rate swaps), 74–79
IT (information technology) infrastructure needs, 72–73, 177
Japan, banking crises, 91, 134
Jett, Joseph, 122
Jobs, Steve, 25–26
Jorion, Philippe, 72, 79, 154, 183–184, 191, 204, 221, 226, 241–243, 318, 327, 482
JPMorgan, 203, 421
Kahneman, Daniel, 22–24
Kashima Oil Co. trading loss (1994), 104, 108–109, 114–115, 124, 128, 130
Kealhofer, Stephen, 416–417
Kerviel, Jérôme, 113, 124
Keynes, John Maynard, 47–48, 60
Kidder, Peabody & Co. trading loss (1994), 105–106, 108–109, 122–123


Kindleberger, Charles P., 132
Klüppelberg, Claudia, 226
Kmenta, Jan, 230
Knight, Frank, 47, 59
Kolmogorov, A. N., 47
KRIs (key risk indicators), 522–523
Langer, Ellen, 63
Laplace, Pierre-Simon, 52
Law of large numbers, 42, 44–45, 48, 64, 237, 444
Lee, David, 120
Lee, K. T. A., 305
Leeson, Nick, 112, 115, 131
Legg Mason Value Trust performance, 53–58
Legitimate practices, trading losses from, 127–129
Lehman Brothers' trading loss (2008), 13, 379, 390
LeRoy, Stephen F., 47
Let's Make a Deal (TV show), 30–36
Leveraged instruments. See also CDS (credit default swaps); Hedge funds
  defined, 82
  and liquidity risk, 182, 497–498, 505–507
  speculation in, 111–114, 128, 130
LGD (loss given default), 388–389, 442, 447
Limits, implementing, 89–90
"Linda the Bank Teller" example, 22–24
Linear mixed models, generalized, 448–450
Line managers, 3, 5, 67
Liquidating assets, costs, 483–487
Liquidity risk
  asset liquidity risk, 484–496
  asset vs. funding liquidity, 481
  credit vs. market risk, 380
  funding liquidity risk, 496–512
  and systemic failures, 512
Litterman, Robert, 151–152, 157–160, 204, 282, 311–312, 316, 318
Lleo, Sébastien, 72, 183
Local-valuation method, 221
Location, in distribution measurements, 20, 188
Logical probability, 46–47
Lombard Street (Bagehot), 496–497
London Interbank Offered Rate (LIBOR), 80–82
Loss-distribution measurements for operational loss, 523–526
Losses, anticipating, 42, 135. See also P&L (profit and loss)
Loss event categories, 516–519
Lowenstein, Roger, 58, 76, 78–79
LTCM (Long-Term Capital Management) fund collapse (1998), 76–78, 110–111, 113, 127–128, 130–131
Luck, 6, 25–28, 64, 530
Luck (Rescher), 6
Mackay, Charles, 132
Macroeconomic financial crises. See Systemic risk
Managers
  collaboration in tandem with risk professionals, 138
  contribution to trading loss events, 131
  incentivizing, 68–70
  overconfidence, 63
  responding to shareholders/owners, 68–69
  risk understanding, importance, 73, 137–138


  training to use measurement tools, 67–68
Manhattan Investment Fund trading loss (2000), 105, 110–111, 120, 124, 128
Manias, Panics, and Crashes: A History of Financial Crises (Kindleberger), 132
Marginal contribution to risk
  calculating, 160–162, 318–327, 365–368
  definitions and terms used for, 317–318
  reporting, 353–354
  subportfolios, partitioning approach, 361–362
  volatility estimates
    multiple-asset best hedge position, 364–365
    simple portfolio, 329–333
    single-asset best hedge position, 363
    single-asset zero position, 362–363
Margin calls, 182–183
Mark, Robert, 85–88, 90, 179–180, 204, 261, 327
Market/cash distribution, 509–510
Market risk
  credit risk vs., 379–383
  defined, 178–179
  estimating
    historical approach, 217–218
    Monte Carlo approach, 218
    parametric approach, 216–217
  limits associated with, implementing, 89
  modeling, 219–223
  operational risk vs., 515
  and P&L, 207–208, 270–284
  reporting
    sample portfolio, 347–353
    subportfolios, 355–360
  risk categories, 179–180
  risk factor distribution estimates, 244–251
  and risk-weighted assets, 92
  terminology for, 7
Mark-to-market payments, 509
Markowitz framework, 18–19
Marrison, Charles (Chris), 179, 226, 319, 324, 368, 392, 402–504
McNeil, Alexander, 183, 196, 226, 233, 237–239, 241–243, 300, 365–366, 368, 403, 405–406, 525
Mean-variance Markowitz framework, 18–19
Merrill Lynch trading loss (1987), 104, 110–111, 119, 127–129, 131
Merton, Robert C., 71, 410–416
Meta distributions, 300
Metallgesellschaft trading loss (1993), 104, 110–111, 115, 127–129, 511–512
MF Global Holdings trading loss (2008), 105, 110–111, 122
Migration modeling, for credit risk estimates, 421–429
Mikosch, Thomas, 226
Miller, Bill, 26–28, 53. See also Legg Mason Value Trust Fund
Mirror portfolios. See Replicating portfolios
Mixture of distributions
  for credit risk modeling, 401–403
  two-position example, 303–304
Mixture of normals assumption, 291–296


MKMV (Moody's KMV) credit risk model
  CreditMetrics model vs., 429
  data sources, 416
  default probability function, 418–419
  factor structure and dependence across firms, 419–421
  implementing, 416–417
  unobservable assets, 417–418
Mlodinow, Leonard, 24, 36–39
Monte Carlo approach to risk estimation
  asset-to-risk-factor mapping, 271–272
  copula/multivariate approach, 300–306
  marginal contribution to risk calculations, 366–368
  overview, 217–218
  P&L distribution, 276–278
  parametric and historical approaches vs., 224–225
  volatility and VaR calculations, 206, 223–224, 278–281, 324
Monty Hall problem, 30–36
Morgan Grenfell trading loss (1997), 104, 108–109, 118, 124–125, 128
Mortgage bonds. See also Tail (extreme) events
  and credit risk modeling, 378, 475
  liquidation costs, 506
  repo market for, 506
  subprime, 83, 181–182, 465, 512
Multiple asset portfolios. See also Covariance
  analytic challenges, 10
  analyzing tail events, parametric assumptions, 294–296
  calculating marginal contribution to risk, 364–365
  mixture of normals approach, 294
  replicating portfolios for, 165, 167, 335–337
Multiple-issuer credit risk, 181, 378–379
Multivariate distributions, 231, 241–243, 305. See also Copulas
National Australia Bank trading loss (2004), 105, 108–109, 121, 124
Natural frequencies, 49, 51
NatWest Markets trading loss (1994), 105, 110–111, 121, 124, 128
Negative binomial distribution, 479–480
Newton, Isaac, 98
New York Times, 60
Nocera, Joe, 103, 512
Non-normal multivariate distributions, 241–243
Nonsymmetrical distribution, 145–146
Normal distribution
  analyzing tail events, 292
  calculating risk factor distributions, 272–273
  determinants, 244
  and marginal contribution to volatility, 324
  overview, 144–146
  P&L distribution estimates, 154
  predicting tail events, 227–229
Normal mixture distributions, 231
Normal-normal distribution, 303–304


Normal trading, and trading loss events, 108–111
Norway, systemic banking crisis (1987–1993), 134
Objective probability. See Frequency-type probability
One-off events, probability of. See Belief-type probability
Operational risk
  capital charges and, 525–526
  loss events vs., 517, 519
  managing and mitigating, 513–514, 527
  market risk/credit risk vs., 515
  measuring and modeling losses, 523–526
  overview, 183–184, 514–519
  sources and types, 519–523
Operations/middle office (risk management group), 89
Opportunity, capitalizing on, 6, 8, 16, 530
Options, embedded, 70–71, 111
Orange County, CA trading loss (1994), 104, 110–111, 114, 127
"Order of Magnitude Physics" (Sanjoy, Phinney, and Goldreich), 138–139
OTC (over-the-counter) transactions, 106, 115, 181, 379, 509
Other mapping/binning, 210, 215–216
Overconfidence, problem of, 62–65, 172
P&L (profit and loss) distribution
  ambiguity of, 140, 142–143
  and asset liquidity risk, 494–496
  asset/risk factor mapping, 210–216
  as basis for financial risk management, 7, 139–141, 178, 223–224
  constructivist vs. market approach, 382–383
  and costs of liquidation, 483, 484–488
  and credit risk, 377–378
  day-by-day P&L, 482–483
  estimating, general approach, 9–12, 139–141, 153–154, 188, 206–210, 219–225, 273–278
  location and scale (dispersion), 188
  and operational risk, 515, 519–523
  and risk factor distribution, 144–146, 216–219, 244
  sources of variability, 8, 16–21, 207
  and static credit risk modeling, 410
  time scaling, 149–150, 200–202
  volatility and VaR and, 143–149, 199–200
  when comparing securities or assets, 155–156
Paradoxes, and ambiguity, 58–62
Parametric approach/parametric distribution
  asset-to-risk-factor mapping, 271
  historical and Monte Carlo approaches vs., 224–225
  modeling, 220–221
  overview, 154, 216–217
  P&L distribution estimates, 273–274


  risk factor distribution, 272–273, 307–310
  second derivatives, 262–267
  tail events, 291–296
  volatility and VaR, 205–206, 223, 278–281
Partitioning, 327, 361–362
Past/future asymmetry, 36–37
Pentagon Papers, 60
Physical measure/actuarial approach to risk pricing, 459–460
Poisson distribution, 444, 479
Poisson mixture models, 446–449, 455–456
Poisson random variable, 430–432
Popper, Karl, 47–48
Portfolio allocation. See also P&L (profit and loss) distribution; Risk management; Risk measurement
  diversification and, 196, 403–404
  manager responsibilities, 3, 7
  Markowitz framework and, 19
  and P&L, 207
Portfolio analysis. See also specific risk measurement approaches
  all-or-nothing contribution to risk, 327
  asset liquidity risk estimates, 492–494
  best hedge position calculations, 327–333
  comparing positions, summary measures, 283–284
  contribution to risk calculations, 160–163, 317–327
  and correlation, 218–219, 326–327
  day-by-day P&L, 482–483
  liquidation costs, simple CAC portfolio, 488–491
  multi-asset replicating portfolio, 335–337
  principal components analysis, 337–346
  risk reduction potential calculation, 314–317
  simple replicating portfolios, 333–335
  understanding and communicating risk, 311, 347–354
  using copula and Monte Carlo approach, 300–306
  using parametric approaches, 291–296
  volatility and VaR measures, 270–283, 313–315
  zero position contribution to risk, 327
Price, Richard, 52
Principal-agent problems, 68–69
Principal components analysis
  application to P&L estimates, 344–346
  basic concepts and approach, 337–340
  risk aggregation using, 340–344, 370–375
  user-chosen factors, 346
  using, 215, 312
Probability
  assumptions and, 31, 37–38
  Bayes' rule (Theorem), 48–51
  belief-type probability, 45–47
  binomial distribution, 478
  combining belief-type and frequency-type probability, 52–53, 58
  of default, modeling, 388–389


  defined, 39
  frequency-type probability, 43–45
  gamma random variable, 479
  joint, in credit risk modeling, 426–429
  negative binomial distribution, 479–480
  nonintuitive approaches to, 24–25
  and past/future asymmetry, 36–37
  Poisson distribution, 479
  probability theory, history, 47
  probability paradoxes, 28–36
  and randomness, 22–24
  and runs, streaks, 25–28, 40–41
  uses for, 42
Process/business line operational risks, 521
Procter & Gamble trading loss (2007), 105, 110–111, 121
Profits. See P&L (profit and loss) distribution
Proxy mapping, 210, 216
Quantile distributions, 254–256, 258–261
Quantitative risk measurement. See Risk measurement
Ramsey, Frank Plumpton, 48
Randomness. See Uncertainty and randomness
Random walks, 28–30
Reckoning with Risk: Learning to Live with Uncertainty (Gigerenzer), 39
Regulation, 90–92, 131, 134, 517–518, 520
Reinhart, Carmen M., 13, 133
Replicating portfolios
  multi-asset portfolios, 335–337
  reporting, 354–355
  stepwise procedure for, 369–370
  using, 164–167, 333–335
  volatility estimates, 329
Rescher, Nicholas, 6, 64, 530
Reserves, 477
Risk
  ambiguity/uncertainty vs., 58
  defined, 15, 19, 178, 187–188
  idiosyncratic vs. systemic, 12–13, 102
  importance of managers' understanding of, 73
  luck vs., 64
  multifaceted nature of, 17–20
  sources of, 158–160
  types of, overview, 178–184
  upside vs. downside, 16
Risk advisory director, 86
Risk aggregation, 337–346
Risk assessment (operational risk), 519–523
Risk aversion, 59
Risk communication/reporting
  best hedges and replicating portfolios, 354–355
  bottom-up vs. top-down approach, 347
  consistency in, 7
  daily data reporting, 72–73, 177
  data inputs, 176–177
  importance, 4, 11, 39, 172, 311
  IT systems, 177
  marginal contribution to risk, 353–354
  risk management group, 89
  for sample portfolio, summary, 347–353
  for subportfolios, 355–360
Risk events. See Tail (extreme) events


Risk factor distributions, estimating, 245–248, 272–273
Risk management. See also P&L (profit and loss) distribution; Portfolio analysis; Risk communication/reporting; Risk measurement
  as core competence, 5, 67, 101–102, 175–176, 529–530
  credit risk management, 409–410
  fraud-preventing policies and systems, 125–127
  goals and importance, 3–6, 70–71, 92
  heuristics, cautions about, 97–98
  infrastructure/programming needs, 176–178, 391
  judgment and expertise needed for, 55, 68, 94, 140–141, 169, 171–172, 176, 189, 203, 282–283, 333, 423, 481, 493, 529
  liquidity risk assessments, 492–494
  managing people, 68–71
  managing processes and procedures, 71–72
  and operational risk, 504, 513–515, 527
  organizational structure/culture and, 6, 71, 84, 87
  parties responsible for, 3–4, 7–9, 12, 85–87, 89
  processes and procedures for, 39, 71–72, 86–90, 92–93
  probabilistic intuition, 42–43
  risk measurement vs., 3, 5
  understanding day-by-day P&L, 312, 482–483
  understanding tail events and systemic failures, 13, 296–299, 512
  using risk professionals, 138
Risk measurement. See also P&L (profit and loss) distribution; Risk management and specific measurement tools and approaches
  best hedge position calculations, 164, 327–333
  frequency- and belief-type probabilities, 52
  comparing securities and assets, 155–157, 285–286
  consistent measurements, tools for, 7, 43, 67–68, 184–185
  contribution to risk calculations, 160–163, 317–318
  credit risk vs. market risk, 390
  data needs and sources, 7–9, 379
  distributions/density, 16–17, 20–21, 144–146
  expected shortfall calculations, 199–200
  funding liquidity risk, 498–504
  identifying sources and direction of risk, 340–344
  interest rate swaps (IRS), 74–79
  importance and goals, 4, 96–97, 175–176
  independence of within corporate structure, 87
  information technology infrastructure needs, 72–73
  language of quantification, 6–7
  limitations, 5, 170–172
  market approach, 382–383
  measuring extreme (tail) events, 151–153
  portfolio management tools, 311–312
  principal components analysis, 337–346
  risk management vs., 3, 5


  standard vs. extreme conditions, 202–203, 291–296
  summary measures, 19–21, 188–189
  uniform foundation for, 4
  using approximation, simple answers, 138–139
RiskMetrics, 215–216, 318, 327
Risk-neutral risk pricing, 461–464
Risk pricing, 461–472
Risk reduction potential calculation, 314–317
Risk unit (risk management group), 89, 137–138
Risk-weighted assets, 91–92
Rogoff, Kenneth S., 13, 133
Rogue trading, 103, 112
Rosenhouse, Jason, 32–33
Rubin, Howard A., 119
Runs, streaks, 25–28, 40–41
Rusnak, John, 118
Russia, systemic crises in, 76, 78
S&P 500, average daily volatility and return, 20, 159, 189, 249, 422–423
Sadia trading loss (2008), 104, 110–111, 118, 128, 130
Sampling distribution, 256, 258–261
Santayana, George, 206
Savage, Leonard J., 48
Savant, Marilyn vos, 32–33
Scale (dispersion), 20–21, 141, 146, 153, 188–190, 291–292
Schoutens, Wim, 226, 227–229
Second derivatives, 262–267, 307–310
Segars, Johan, 226, 227–229
Selvin, Steve, 32
Senior managers, 7–9, 12, 85–87, 89
Sensitivity, measuring, 8
Settlement risk, 181, 379
Shareholders, 68–69, 71, 85, 123, 125, 129, 413
Share value, bonds, 71, 74–83
Shaw, W. T., 305
Shortfall, expected. See VaR (Value at Risk), 365–368
Showa Shell Sekiyu trading loss (1993), 104, 108–109, 114, 124, 128, 130
Siegel, Larry B., 71
σ (sigma). See Volatility
Single assets
  analyzing tail events, 292–293
  calculating marginal contribution to risk, 362–363
Single-firm (marginal) migration matrixes, 421–425
Single-issuer credit risk, 181, 378
Skewness, 394–403
Social proof (herding instinct), 98
Société Générale trading loss (2008), 92, 108–109, 113, 124, 130
South Sea Bubble, 132–133
Spain, systemic banking crisis (1977–1985), 134
Speculation, failures associated with, 78, 103, 127–130
Standard error, 254–256, 261, 405
Standardizing positions, summary measures for, 283–284
State of West Virginia trading loss (1987), 104, 108–109, 119, 124, 128
Static structural risk models, 411–416
Statistical approaches and randomness, uncertainty, 21–39. See also Distributions; P&L (profit and loss) distribution; Risk measurement; VaR (value at risk); Volatility and specific statistical approaches


Statistical or empirical factor mapping, 210, 214–215
Staying Alive in Avalanche Terrain (Tremper), 96–98
Stochastic dominance, 17
Stock prices
  application of frequency probability to, 44–45
  risks associated with, 95
Strategic operational risks, 520–521
Structural credit risk models, 410–411
Student-normal distribution, 303–304
Student-Student distribution, 303–304
Student t distribution, 230–236, 291–296, 305–306
  alternate student distribution, 303–306
Stylized credit risk model, 384–391
Stylized financial time series, 245–248
Subadditivity, 196–198
Subjective probability. See Belief-type probability
Subportfolio analysis, 355–362
Sumitomo Corporation trading loss (1996), 104, 108–109, 114, 124
Summary measures
  for aggregating risk, 285–290
  distribution/density functions, 19–21
  limits of, 205
  for standardizing and comparing positions, 285–286
  for tail events, 290–306
  for volatility and VaR, 270–283
Summary risk report, 349
Supervision, lax, 109–112, 116, 129, 131
Swap rates and spreads, 76–78
Sweden, systemic banking crisis (1991–1994), 134
Symmetric distributions, 190–191, 195–196
Systemic risk
  costs, 102–103
  idiosyncratic risk vs., 12–13, 102
  and managing liquidity crises, 512
  systemic financial events, 132–135
Tail (extreme) events
  analytic tools and techniques, 226–230
  copulas, 148–149, 241–243
  distribution, order statistics, 256
  extreme value theory, 151–152, 237–241, 296–306, 327
  idiosyncratic, 103
  and limits of quantitative approach, 172
  measuring, 139
  1974–2008, summary, 103–122
  parametric analysis for single asset, 291–296
  Student t distribution, 230–236
  two-point mixture of normals distribution, 231–236
  understanding, importance, 101–102
  use of VaR for, 149, 203–205
  variability among, 205
Taleb, Nassim, 24
Temporal factors, 380
Thirlwell, John, 513–514, 517, 521, 523
Threshold models. See Structural credit risk models
Time scaling, 149–150, 200–202
Time-series econometrics, 244–248
Titanic disaster example, 102


Traders, compensation approaches, 70–71
"trader's put," 70–71
Trading in excess of limits, 108–111
Trading loss events, 1974–2008
  categories of loss, 107–112
  failure to segregate and lax supervision, 131
  from fraudulent practices, 107–112, 123–125, 127
  from legitimate business practices, 127
  lessons learned from, 131–132
  loss accumulation periods, 130–131
  main causes, 131
  from non-fraudulent or tangentially fraudulent practices, 127–130
  size and description of loss, 113–122
  summary table, 103–107
A Treatise on Probability (Keynes), 48
Tremper, Bruce, 96–98, 101
Triangle addition for volatility, 313–315, 325–326
Tversky, Amos, 22–24
Two-point mixture of normals distribution, 231–236
Uncertainty/randomness
  ambiguity aversion, need for control, 38–39, 59, 62–64, 142–143
  and human intuition, 22–26, 37–38
  and past/future asymmetry, 36–37
  and people management, 69–70
  and risk management, 140
  risk vs., 58
  runs, streaks, 25–28, 40–41
  sources, overview, 251–252
  and volatility, VaR, 252–254, 283–284
Union Bank of Switzerland (UBS) trading loss (1998), 104, 110–111, 117, 127, 129, 131
Unique risk ranking, 18
United States
  S&L crisis (1984–1991), 134
  Treasury rates, 76–78
U.S. Treasury bond
  calculating DV01/bpv, 154–155
  distribution and tail behavior, 153
  marginal contribution calculations, 161–165
  P&L distribution example, 141–142
  time scaling example, 150
  volatility, 143–144
Valuation model for asset mapping, 210, 211–212
Value Trust Fund winning streak, 53–58
VaR (Value at Risk)
  for aggregating risk, 285–290
  all-or-nothing contribution calculations, 327
  calculating, 221, 223–224, 299, 306
  conditional VaR/expected shortfall, 199–200
  contribution to risk calculations, 316–326, 365–368
  credit risk modeling using, 391–393
  defined, 10, 191–193
  interpreting, cautions, 89, 146–148, 170–171, 204–206, 283–284


  for liquidation cost estimates, 488–491, 494–496
  probability expressions, 42, 191
  relation to volatility, 194–195
  reporting risk estimates using, 352–353
  for single bond position, 270–283
  small-sample distribution, 254–261
  for standardizing and comparing positions, 283–284
  and subadditivity, 196–198
  and symmetric vs. asymmetric distribution, 194, 206
  for tail events, 226, 295
  with two-point mixture of normal distribution, 232
  using effectively, 148–149, 158–160, 202–206
  variability in over time, 150, 252–254
Variance, 143, 256–258
Variance-covariance distribution, 216–217, 251. See also Parametric approach to risk estimation
Variance-covariance matrix, 221, 244, 249, 251, 292, 294, 319, 371–375
Vasicek, Oldrich, 416–417
Venn, John, 47–48
Volatility (σ; standard deviation)
  aggregating/summarizing risk using, 158–160, 283–290
  best hedge positions/replicating portfolios, 164–166, 329–333
  contribution to risk calculations, 162–163, 316–326
  estimating, 7–10, 143, 153–154, 189, 223–224, 279, 306
  exponential weighting, 250–251
  interpreting, cautions, 143–144, 170–171
  liquidity risk estimates, 487–491, 494–496
  low vs. high dispersion, 190
  marginal contribution calculations, 160–162, 318–326, 362–368
  market/cash distribution, 509–510
  relation to VaR, 194–195
  reporting risk estimates using, 351–352
  for single bond position, 270–283
  for tail events, 295
  triangle addition for, 313–315, 325–326
  using effectively, 148–149, 190–191, 312–313
  variability of over time, 150, 245, 248–249
  variance-covariance distribution estimates, 248–251
  volatility estimates for simple portfolio, 329–333
Volatility (standard deviation), measurement uncertainties, 283–284
Volatility point, 78
Voltaire, 101
Von Mises, Richard, 47
Weather predictions, 45
Weatherstone, Dennis, 203
WestLB trading loss (2007), 104, 110–111, 119, 127, 129
Worst-case situations, 171, 203–205
Yates, Mary, 101
Z% VaR, 194, 205, 232
