The Smoking Cigar of Behavioural Bias
Not all failures of investment logic are rooted in human psychological flaws but, to paraphrase Freud, although sometimes a cigar is just a cigar, mostly it’s behavioural bias. The smoking gun is almost invariably linked to people doing predictably stupid things: building shacks on earthquake fault lines, thinking they can banish risk with a spreadsheet, and regarding the lessons of history as too remote to be interesting.
Sadly the fact that these things are predictable doesn’t make them any easier to deal with. Our current set of financial woes is a wonderful test bed for those inclined to point to the short-termist biases inherent in the human condition. We’d do well to enshrine these lessons in our systems now, because it won’t be long before we start to forget.
The Causes of the Crisis
Andrew Haldane of the Bank of England has written a couple of insightful and globally applicable papers about the failures of bank risk management and financial regulation over the past decade: Why Banks Failed The Stress Test and Small Lessons From a Big Crisis. Let’s briefly summarise his findings.
Firstly, banks took on excess leverage, probably to keep their returns in line with those of their competitors. Haldane suggests that historically banks grew in line with stockmarkets until 1985, when suddenly they started making money hand over fist – a fact he traces to their expanded use of debt backed by less and less capital. Basically, the banks lent more and more money against less and less security – which is great when your assets are going up in value but can be deadly when they drop. And don’t we know it.
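To see why leverage was so seductive on the way up and so lethal on the way down, here’s a minimal sketch with invented balance-sheet numbers (these are not Haldane’s figures): return on equity is just the asset return scaled up by leverage, net of the cost of the borrowed funds.

```python
# Illustrative only: invented numbers, not any bank's actual balance sheet.
def roe(asset_return, leverage, funding_cost):
    """Return on equity for a bank holding `leverage` units of assets
    per unit of equity, borrowing the rest at `funding_cost`."""
    return asset_return * leverage - funding_cost * (leverage - 1)

for lev in (2, 10, 30):
    up = roe(0.05, lev, funding_cost=0.03)     # assets gain 5% in a good year
    down = roe(-0.05, lev, funding_cost=0.03)  # assets lose 5% in a bad one
    print(f"{lev:>2}x leverage: ROE {up:+.0%} when assets rise, {down:+.0%} when they fall")
```

At 2x leverage the bad year is survivable; at 30x a mere 5% fall in asset values wipes out the bank’s equity more than twice over.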
Then banks justified the increase in leverage through the use of quantitative risk management models. Just as you can build a bridge with less steel if you can accurately calculate how it reacts under load, so banks believed their new models allowed them to take on more leverage because they could more accurately calculate the risks they were taking.
Unfortunately these models failed to use data from previous eras that might have suggested a crisis of some kind was just around the corner. It’s like taking the airbags out of your car because you haven’t had an accident since you fitted a SatNav system. Correlation is not causality.
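A toy illustration of the data-window problem, using simulated returns rather than real market data: a historical-simulation value-at-risk estimate fed only a calm recent sample sees a fraction of the risk that a longer history, crisis included, reveals.

```python
# Sketch of the data-window problem: a 99% value-at-risk (VaR) estimate fed
# only a recent, calm sample sees far less risk than one fed a history that
# includes a crash. All returns here are simulated, not market data.
import random

random.seed(1)
calm = [random.gauss(0.0005, 0.01) for _ in range(500)]    # quiet recent era
crisis = [random.gauss(-0.002, 0.04) for _ in range(250)]  # an older crash
full_history = crisis + calm

def var_99(returns):
    """Historical-simulation VaR: the daily loss exceeded on 1% of days."""
    return -sorted(returns)[int(0.01 * len(returns))]

print(f"VaR from the calm window only: {var_99(calm):.1%} daily loss")
print(f"VaR including the crisis era : {var_99(full_history):.1%} daily loss")
```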
OTC Market Failures
Next, most of the markets that failed spectacularly were those that relied on over-the-counter (OTC) trading, in which a seller and a buyer must be directly connected. When I sell my shares I do so through the central clearing house that is the stockmarket; I don’t need a direct relationship with the purchaser. OTC markets, though, can seize up when you can’t find a counterparty to buy an asset, sending prices plummeting and depriving banks of vital liquidity.
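For intuition, here’s a hypothetical three-firm example contrasting the gross claims that pile up in a bilateral OTC world with the single net position each firm would hold against a central clearing house. The firms and amounts are invented.

```python
# Toy contrast between bilateral OTC exposure and central clearing:
# bilaterally, each firm is exposed to every counterparty it trades with;
# a clearing house nets each firm down to one position against itself.
from collections import defaultdict

trades = [("A", "B", 100), ("B", "C", 80), ("C", "A", 90)]  # (payer, receiver, amount)

gross = defaultdict(int)  # bilateral world: everything owed to you by others
net = defaultdict(int)    # cleared world: flows netted per firm
for payer, receiver, amount in trades:
    gross[receiver] += amount
    net[payer] -= amount
    net[receiver] += amount

for firm in "ABC":
    print(f"{firm}: gross bilateral claim {gross[firm]}, net cleared position {net[firm]:+}")
```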
Worse still, financial institutions turned out to be completely useless at assessing counterparty risk. Imagine that you've bought insurance from an insurance company and that company has offset that risk by reselling it to multiple other insurers. You may be able to assess the direct risk of your counterparty insurer failing, but how do you manage the risk of that company’s counterparties failing? This network of risk, a veritable cascade of Chinese Whispers, failed dramatically in the wake of the collapse of Lehman Brothers, whose auditors are still trying to unwind the cat’s cradle of inter-related risks.
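A minimal default-cascade sketch makes the point; the network and numbers below are entirely hypothetical, not a model of the actual Lehman aftermath. Each firm fails once its losses from already-failed counterparties exceed its capital, and the failures propagate.

```python
# Hypothetical default cascade: a firm fails when its losses from
# already-failed counterparties exceed its capital buffer.
capital = {"InsurerA": 10, "InsurerB": 15, "Bank": 25}
# exposure[x][y] = what x loses if y fails
exposure = {
    "InsurerA": {"Lehman": 12},
    "InsurerB": {"InsurerA": 20},
    "Bank": {"InsurerB": 18, "Lehman": 5},
}

failed = {"Lehman"}  # the initial shock
changed = True
while changed:  # keep propagating until no new failures occur
    changed = False
    for firm, links in exposure.items():
        if firm in failed:
            continue
        losses = sum(amount for cpty, amount in links.items() if cpty in failed)
        if losses > capital[firm]:
            failed.add(firm)
            changed = True

print("failed after cascade:", sorted(failed))
```

Note that the Bank never traded with InsurerA at all, yet InsurerA’s failure still reaches it through InsurerB: that second-order exposure is exactly what nobody was measuring.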
Finally, regulators failed to realise that they had got the risk profile of institutions inverted. While they were paying most attention to smaller banks with less diverse portfolios, it turned out that the giant players were the main culprits. Essentially the bigger banks were the source of most of the global contagion and were, to boot, too big to fail.
Ignore the Proximate Causes
Although these points trace out the spider’s web of causality that has led to the grim unwinding of the world’s banks’ excessive leverage, it’s the psychological drivers behind them that interest us here. No matter what mechanisms regulators develop to protect us from similar problems in the future, they’ll always fail while they focus on proximate causes rather than the underlying issues. Regulators, like generals, are always fighting the last battle.
We can trace all of the immediate causes of the crisis to a combination of human psychological biases. Each, on its own, can create significant problems, but taken together they create what Charlie Munger refers to as a lollapalooza effect: the biases magnify one another to cause huge problems.
The Lollapalooza Cascade
The first factor seems to be a simple application of the availability heuristic: the human bias where things that have happened recently are more ‘real’ than those that happened further back in time. A secondary effect is that people discount small-probability events with catastrophic impacts to zero – people wouldn’t build houses on earthquake faults otherwise. This effect, known as disaster myopia, means that after a while the memories of the last great financial crisis fade and are eventually completely discounted. People really do believe that it’s different this time.
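The arithmetic of disaster myopia is stark. With invented numbers: round a small annual disaster probability down to zero and both the expected loss and the chance of seeing a disaster over a career-length horizon vanish from the calculation.

```python
# Back-of-envelope disaster myopia: a small annual probability of catastrophe,
# rounded down to zero, hides a large expected loss. Numbers are invented.
p_disaster = 0.02        # assumed 2% chance per year of a systemic crisis
loss_if_disaster = 1000  # loss in a crisis, in arbitrary units
horizon = 25             # years in a typical career

expected_loss = p_disaster * loss_if_disaster * horizon
p_at_least_one = 1 - (1 - p_disaster) ** horizon

print(f"expected cumulative loss over {horizon} years: {expected_loss:.0f}")
print(f"chance of at least one disaster in {horizon} years: {p_at_least_one:.0%}")
# Set p_disaster to zero and both numbers read 0 -- which is exactly
# what building on the fault line assumes.
```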
Once the availability bias was triggered we saw overconfidence in quantitative risk management models, leading to the over-leveraging of bank balance sheets and a furious race to the bottom as bankers tried to match each other’s excessively geared returns on equity. It was pure hubris to believe that these new models had banished risk, and on any qualitative assessment of network counterparty risk it’s obvious that banks couldn’t accurately assess the risks they were taking. Stress testing, as it’s called, was a work of fiction.
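A toy stress test shows how the fiction works; the balance sheet and shock sizes here are invented. Calibrate your worst-case scenario to a quiet decade and a 25x-leveraged bank passes comfortably; apply a shock on the scale of the October 1987 one-day crash and it’s insolvent.

```python
# Toy stress test: apply a shock to a leveraged balance sheet and see whether
# equity survives. Balance sheet and shock sizes are hypothetical; the point
# is that a test calibrated only to recent history is trivially easy to pass.
assets, equity = 1000, 40  # 25x leverage, invented numbers

scenarios = {
    "worst day of the quiet decade": -0.01,   # the kind of shock recent data suggests
    "1987-scale one-day crash":      -0.20,   # roughly the October 1987 equity fall
}

for name, shock in scenarios.items():
    remaining = equity + assets * shock
    verdict = "survives" if remaining > 0 else "insolvent"
    print(f"{name}: equity {remaining:+.0f} -> {verdict}")
```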
The psychological superchargers then kicked in as banks chased each other to generate ever greater returns, egged on by the egregiously large rewards for senior executives who succeeded. Failure to recognise that earnings growth was simply a side-effect of taking on more, unmanaged, risk meant that increasingly perverse incentives drove ever riskier behaviour and weaker and weaker risk management.
Finally, it seems that the objectives and incentives of bankers and regulators were horribly mismatched. Regulators worked on the basis that bankers shared their concerns and objectives and wouldn’t put their institutions at risk. Bankers, on the other hand, were cynically calculating that governments couldn’t allow them to fail. We all know who won that particular battle. Regulators are still coming to terms with the fact that their pet poodle banks were actually ravening wolves.
It’s Never Different This Time
To fight the next war, rather than the last one, regulators need to enshrine human nature, with all of its flaws, in their approaches. You can’t do that with a rule book alone: you need a system which can flex and bend with changing economic conditions while recognising that behavioural biases will inevitably warp the way that people attempt to game the regulatory and financial system. This is true both on the way up and on the way down.
Of course, memories will fade, new systems of risk management will be created, overconfidence will kick in, the incentives to behave perversely will creep through the global network and one day we’ll have another crisis. I probably won’t be around to see another one on this scale, but I absolutely guarantee that you’ll find earnest financiers explaining once more that it’s different this time.
Engrave it on people’s hearts and in regulator training manuals: it’s not different this time, next time or any time. It never is and never can be. It’s only the story that changes, because the people remain the same flawed creatures they always have been.
Related Articles: Gaming The System, Perverse Incentives Are Daylight Robbery, Quibbles With Quants
I agree that it wasn't at all different this time. Investors (and most investing "experts" too) have been making the same basic mistake over and over again ever since the first stock market opened for business. I believe that the answer is the rejection of the Passive Investing model of understanding how stock investing works.
In the Passive Model, investors do not change their stock allocations in response to price changes. Thus, most investors end up wildly overinvested in stocks at the times when stocks are most dangerous (when prices are high) and wildly underinvested in stocks at the times when stocks are most appealing (when prices are low). Investors who are at most times going with wildly improper stock allocations become more and more emotional over time.
A valuation-informed model (I call this model the Rational Model) does just the opposite. It uses the historical stock-return data to show investors that long-term returns are far better at times of fair prices and that stocks are insanely dangerous at times when prices are as high as they were from 1996 through 2008, and it encourages investors to adjust their allocations as needed to keep their risk levels roughly constant.
Investors who invest rationally are able to keep their emotions in check. When prices are high and the "experts" are saying to put everything in stocks, they check the historical data and see that super-safe asset classes like IBonds and CDs offer a better long-term value proposition than stocks and set their allocations accordingly. When prices are low and the "experts" are saying that everyone should get out of stocks, they check the historical data and see that stocks purchased at low prices offer an amazing long-term value proposition.
The key to successful stock investing is getting one's emotions under control. The Passive model makes this impossible because it urges investors to ignore the effect of valuations on long-term returns, the most important factor that investors should be looking at when setting or adjusting their allocations. Once you get your stock allocation right (this becomes possible only with abandonment of the Passive model), it becomes possible to diminish the influence of emotion on investing decisions and to get all other sorts of things right too.
Rob