I had the privilege of hearing the Bank of
England’s Executive Director of Financial Stability, Andy Haldane, speak at the
University of Edinburgh on 8th June, where he presented his new
paper. For those familiar with Haldane’s output, it continues his
characteristic style of jargon-free prose and ideas expressed with unerring
simplicity and logic, supported by an impressive armature of complex empirical
exhibits. This paper moves beyond previous concerns with the scale of bank
balance sheets and the problems of interconnectedness in complex systems, and
instead begins to unpack the theoretical assumptions that underpin dominant
neoclassical theories of mathematical finance and form the skeleton of many
financial products.
His argument is that the normal or Gaussian
statistical distributions (the bell curve) that are at the heart of these
assumptions do not accurately describe huge swathes of human and natural phenomena,
whether it is the volume of monthly rainfall, earthquake intensity over time,
the occurrence of unique words in novels, historical prices in rice spot
markets or equity markets, or even annual growth in GDP or real bank loans. These
examples are instead characterized by non-normal or ‘fat tailed’ distributions,
where extreme events are much more likely to occur than Gaussian distributions
imply.
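To make the scale of that difference concrete, here is a minimal sketch (my own illustration, not one of Haldane’s exhibits) comparing the probability of a ‘five-sigma’ event under a Gaussian and under a variance-matched Student-t distribution, a standard stand-in for a fat-tailed alternative.

```python
# A minimal sketch (my illustration, not one of Haldane's exhibits): how much more
# likely a "five-sigma" event becomes once the Gaussian is swapped for a
# variance-matched, fat-tailed Student-t distribution.
from scipy.stats import norm, t

sigma_level = 5     # size of the extreme move, in standard deviations
df = 3              # Student-t degrees of freedom; smaller df means fatter tails

# Rescale the t so its variance is 1, matching the standard normal.
t_scale = (df / (df - 2)) ** -0.5

p_gauss = norm.sf(sigma_level)                 # P(X > 5) under the normal
p_fat = t.sf(sigma_level, df, scale=t_scale)   # same event under the fat-tailed model

print(f"Gaussian:   {p_gauss:.2e}")
print(f"Fat-tailed: {p_fat:.2e}  (~{p_fat / p_gauss:,.0f} times more likely)")
```

On these assumptions the ‘five-sigma’ move is thousands of times more likely under the fat-tailed model, which is the gap that matters when such moves are treated as once-in-an-epoch events.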
Such ideas are not new: Nassim Taleb
and Benoit
Mandelbrot have argued this point for some time. But Haldane’s novelty lies
in his history of ‘normality’, which outlines the migration of Gaussian
principles from the physical sciences into economics and mathematical finance. Haldane’s
view is that this migration is partly a matter of memetic transmission across the sciences and
partly inspired by a kind of primordial desire for symmetry within the human
brain, which makes normal ‘bell curve’ distributions aesthetically
appealing. Breaking with normality is therefore difficult but essential if
financial regulation is to do what is now done in meteorology, where complex
computer systems using non-normal models have been better able to predict and
prepare for weather system risks. Haldane’s conclusion was that it is only by
using non-normal, fat-tailed models and mapping system risk that the effects of
financial crises will diminish over time.
Haldane’s work once again offers many thought-provoking
observations on the world of banking and finance, as well as practical
solutions for how it might be better regulated. But I am still left with one
misgiving, which was expressed in our earlier
CRESC work on Haldane: that weather systems have neither the capacity nor
the incentive to game those non-normal models or maps of system risk once
introduced. Finance does, and therefore the extent to which the methods used to
prepare for fat-tail events in the natural world can be transposed effectively to
the world of financial regulation remains moot. Perhaps for that reason I am
more receptive to Taleb’s view – that there are limits to the predictive power
of statistics in complex systems like finance, and so the job at hand is to
make the system smaller and simpler.
Haldane’s convincing empirical exhibits demonstrate
the prevalence of fat-tail distributions in many walks of life. But they do raise
an interesting paradox: why do banks – or, perhaps more accurately, the quants
working within banks – persist with Gaussian models if normal distributions in
economic and financial systems are so rare? This was the starting
point for Donald MacKenzie, Professor of Sociology at Edinburgh University, whose
paper
followed Andy Haldane’s.
MacKenzie’s answer, based on a detailed
ethnography of bank quants, was that the majority simply do not believe in
Gaussian copula models. MacKenzie’s story is one of quants married to the
aesthetic of mathematical purity and rigour, who embrace the elegance of a
model like Black-Scholes, but have little regard for the Gaussian copula. This
is a problem when most financial risk measures, such as Value
at Risk, and the structuring and pricing of financial products such as CDOs,
are built upon Gaussian principles. So why continue using them? Here, MacKenzie
argues, the reasons are rooted fundamentally in culture – an ‘evaluative
culture’ with Gaussian copula models at its centre, where exit costs are
high. Those exit costs relate specifically to the aim of securing ‘Day One P&L’
– the lump of risk-free profit on a deal from which bonuses are allocated.
MacKenzie explains that if non-Gaussian rather
than Gaussian assumptions were used in the measurement of risk, then Day One
P&L would be much smaller and perhaps even impossible to calculate because
many more risky scenarios and unanticipated events would need to be priced in:
the lump of profit would not be ‘risk-free’.
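A stylised sketch with invented numbers (not MacKenzie’s data) shows the mechanics: if the risk reserve held back against a deal scales with a tail quantile, swapping Gaussian for fat-tailed assumptions shrinks, or wipes out, the upfront profit.

```python
# A stylised sketch with invented numbers (not MacKenzie's data): if the risk
# reserve held back against a deal scales with a tail quantile (here 99.9% VaR),
# swapping Gaussian for fat-tailed assumptions shrinks, or wipes out, Day One P&L.
from scipy.stats import norm, t

notional = 100_000_000      # hypothetical deal notional
upfront_value = 2_000_000   # hypothetical mark-to-model value booked on day one
daily_vol = 0.004           # hypothetical daily volatility of the position
confidence = 0.999
df = 3                      # fat-tailed alternative: variance-matched Student-t

var_gauss = notional * daily_vol * norm.ppf(confidence)
t_scale = (df / (df - 2)) ** -0.5
var_fat = notional * daily_vol * t.ppf(confidence, df, scale=t_scale)

# Treat the VaR figure as a crude model reserve deducted from the upfront value.
print(f"Gaussian reserve:   {var_gauss:,.0f} -> Day One P&L {upfront_value - var_gauss:,.0f}")
print(f"Fat-tailed reserve: {var_fat:,.0f} -> Day One P&L {upfront_value - var_fat:,.0f}")
```

With these toy figures the Gaussian reserve still leaves a comfortable Day One profit, while the fat-tailed reserve turns the same deal negative on day one, which is the sense in which the lump would no longer be ‘risk-free’.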
This finding very much chimes with
discussions and briefing notes that were passed around the secretive ‘dark
pool’ exchange that is CRESC while we were writing our book. These ideas broadly suggested
that the role of derivatives in banking had been misunderstood: that a Credit
Default Swap was not necessarily a tool of risk management or an instrument of wanton
speculation, but a vital component in underwriting bankers’ high pay. If securitization
was always about bringing forward revenues from the future and realizing them
in the present, then derivatives played a vital part in passing on the
uncertainties of the future to another party. Thus various swaps would enable
banking divisions to strip default risk, interest rate risk and so on from the block
of revenue on a deal, leaving behind a notionally risk-free lump. Locking in ‘arbitrage
profits’, for example by holding AAA securities, financing them with a repo and
selling on the default risk via a CDS, would enable the deal brokers to get the
revenues onto the P&L and claim their bonus. Of course, this incentive encouraged
the expansion of a vast transaction-generating machine, as mortgage volumes were
ramped up and worked through the CDO mincer, while risk was passed on to naïve
operators at AIG and monoline insurers. The result: system-wide counterparty
risk that blew up spectacularly in the aftermath of Lehman Brothers’ collapse.
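A toy illustration with hypothetical spreads (none of these figures come from the post) shows the arithmetic of that ‘locked-in arbitrage’ and why the leftover lump is only notionally risk-free.

```python
# A toy illustration with hypothetical spreads (none of these figures come from
# the post): hold an AAA tranche, finance it with a repo, sell on the default
# risk via a CDS, and book whatever spread is left over as a notionally
# risk-free lump.
notional = 500_000_000    # hypothetical position size
aaa_spread = 0.0060       # AAA tranche pays ~60bp over the funding benchmark
repo_spread = 0.0010      # repo financing costs ~10bp over the same benchmark
cds_premium = 0.0025      # CDS protection on the tranche costs ~25bp

net_carry = aaa_spread - repo_spread - cds_premium   # spread retained after hedging
annual_profit = notional * net_carry

print(f"Net carry: {net_carry * 10_000:.0f}bp")
print(f"'Risk-free' profit booked per year: {annual_profit:,.0f}")
# The catch noted above: the protection seller (AIG, a monoline) may not be good
# for the payout, so the label hides counterparty risk rather than removing it.
```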
Two things emerge from this for me. First,
an intellectual question about where performativity
theory goes from here. My (albeit limited) understanding of Callon-influenced
writing (with which MacKenzie has aligned himself in the past) is
that economics performs the economy – it creates the economy in its own image,
provided the correct assemblages can be mobilised to bring those assumptions
into reality. What MacKenzie is now describing, it seems to me, is something
quite different: that the desire to maximise Day One P&L (a financial
incentive, in other words) influences the models used within specific evaluative
cultures, even though those models appear to bear little relation to empirical
outcomes over time. This is not a million miles away from questions we have
been asking for some time: why those models, and why those models at that time?
Second, it also raises an interesting
question about how you might regulate such institutions going forward. If that
evaluative culture could change, so that non-normal models were used to price
in fat-tail events, it would remove many of the more pernicious incentives we
currently see in the banking sector. If lawyer and accountancy costs were
booked up front on a deal and revenues were not realized immediately, those
products would initially be loss-making, and only become profitable after a
period of years. It would therefore tie bonus pay more closely to the long-term
performance of the particular products created. Structurers of those products might
also have to contemplate counterparty risks going forward, and thus think
reflexively about whether they were passing on ‘too much’ risk to others –
rather than maximizing volume and passing off risk as ‘somebody else’s
problem’. It may also resolve intra-firm moral hazard when many ‘innocent’
bankers are penalized by excessive risk-taking in another banking division,
which causes the value of their bonus options to collapse.
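A minimal sketch with invented figures of the accounting change floated above: booking costs up front while recognising revenue over the product’s life makes the deal loss-making at first and profitable only after several years, which is what would tie bonuses to longer-term performance.

```python
# A minimal sketch with invented figures of the accounting change floated above:
# booking lawyer and accountancy costs up front while recognising revenue over
# the product's life makes the deal loss-making at first and profitable only later.
upfront_costs = 3_000_000     # hypothetical structuring, legal and accountancy costs
annual_revenue = 1_200_000    # hypothetical revenue recognised each year, not on day one
years = 7                     # hypothetical product life

cumulative = -upfront_costs
for year in range(1, years + 1):
    cumulative += annual_revenue
    status = "profit" if cumulative > 0 else "loss"
    print(f"Year {year}: cumulative P&L {cumulative:>12,.0f} ({status})")
# Bonuses tied to this schedule would only accrue once the cumulative figure turns positive.
```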
Stanley