Escape from Berlin
It's the 13th of July 1938 and an elderly lady of Jewish extraction is boarding a train in Berlin for the Dutch border, barely escaping the grasp of the Nazi authorities. In one of the great switchback points in history, the onrushing locomotive of destiny has found itself diverted down a path that will lead, circuitously, to the end of the Second World War, the triumph of the great liberal democracies and the rise of quantitative financial modelling.
Her name is Lise Meitner and now, as we leave her travelling her lonely path to exile in Sweden, she has just 10 marks in her purse. And the key to the atomic bomb in her head.
Fermi’s Error
Had history unravelled differently, nuclear fission would have been discovered in Fascist Italy in the early 1930s. Enrico Fermi, one of the greatest scientists of the last century, had actually achieved this by 1934 but he and his team completely misinterpreted the results – a happenstance that looks suspiciously like it was born of confirmation bias: Peter Galison gives a good overview of the events in Author of Error.
The Nobel committee awarded Fermi its Physics prize in 1938 for the discovery of new transuranic elements – the elements beyond uranium – but by the time he came to write his acceptance speech a research group in Berlin had come to a rather less exciting conclusion. Fermi’s team had actually managed to turn uranium into barium – which means that he’s the only person to receive the coveted award for discovering an overly complex way of creating rat poison.
Meitner’s Insight
The Berlin team, led by Otto Hahn, were left to puzzle over these results, a process interrupted when Meitner, the theoretical cuckoo in their midst, was forced to flee to Sweden. There, in collaboration with her nephew and in correspondence with Hahn, she figured out what was going on: what Fermi had seen, and Hahn was replicating, was nuclear fission. A secret that might have been Nazi Germany’s alone was out in the open and triggered a chain reaction of a different kind: one that directly led to the establishment of the Manhattan Project.
Now, of course, the Manhattan Project is justly famous for the development of the atom bomb but like many such projects the scientific advances required to achieve the ultimate ends led to a range of spin-off discoveries. One of these was the full development of so-called Monte Carlo methods, initially pioneered by Fermi and used in simulations of nuclear detonation.
In fact the development of another spin-off, the first electronic computer, ENIAC, came too late to aid the initial development of the atomic program, and without such a machine Monte Carlo methods are of limited use, given their intensive processing requirements. By 1945, however, with ENIAC ready, Johnny von Neumann suggested using it to simulate thermonuclear reactions. The full story is told by Nicholas Metropolis in The Beginning of the Monte Carlo Method.
Monte Carlo Battleships
So what is this method? Well, models generate a deterministic response when provided with the same input: if you put the same x in you always get the same y out. In the real world, though, many systems never actually receive the same x more than once, and that means the outputs can’t really be predicted, because we can’t try every single input. The real world tends to be infinite, not conveniently bounded.
A Monte Carlo model basically works by identifying a set of possible inputs, selecting from them at random, applying these to the model and then analysing the output. A simple example is a game of battleships, where your opponent has a bunch of “ships” somewhere in a grid and you have to guess where they are. Your set of possible inputs is the set of grid co-ordinates, and if you select from these at random, apply them to the model and analyse the results, you end up with some kind of approximation of what the actual configuration of ships on the grid looks like.
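As a rough sketch of how that might look in code – the grid size, ship layout and function names below are invented purely for illustration, not taken from anywhere – uniform random sampling over the grid looks something like this:

```python
import random

# Hypothetical 10x10 battleships grid: each co-ordinate in 'ships' is a cell
# occupied by a ship. The layout is made up purely for this example.
GRID_SIZE = 10
ships = {(2, 3), (2, 4), (2, 5),      # a three-cell ship
         (7, 7), (8, 7)}              # a two-cell ship

def fire(coord):
    """The 'model': a deterministic response to an input co-ordinate."""
    return coord in ships             # True = hit, False = miss

def monte_carlo_battleships(n_shots, seed=42):
    """Sample co-ordinates uniformly at random and record the hits."""
    rng = random.Random(seed)
    hits = set()
    for _ in range(n_shots):
        shot = (rng.randrange(GRID_SIZE), rng.randrange(GRID_SIZE))
        if fire(shot):
            hits.add(shot)
    return hits

# The more samples we take, the closer the picture gets to the true layout.
for n in (10, 100, 1000):
    found = monte_carlo_battleships(n)
    print(f"{n:>5} shots -> located {len(found)} of {len(ships)} ship cells")
```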
Bombing Seascapes
Now, obviously, the more inputs – samples – you apply, the better your idea of what the true picture is. However, it’s also obvious that if your inputs aren’t truly random – and, say, focus on one corner of the grid – you’re unlikely to get a decent idea of the shape of the battlefield. Even worse, if your inputs target co-ordinates that aren’t on the grid at all you don’t have a hope of figuring out what’s going on. Of course, in this particular example you could eventually try every single grid co-ordinate, but when you’re dealing with the real world that’s not going to be possible.
So figuring out the possible range of inputs and then correctly generating random samples from this range are critical to the modelling process. If you don’t get these right then the model itself is useless, because you’ll either be depth charging submarines off the grid entirely or be continuously carpet bombing empty seascapes.
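To see why, we can reuse the hypothetical battleships sketch from above (same made-up grid, ships and functions) and restrict the sampler to one corner of the grid:

```python
def biased_battleships(n_shots, seed=42):
    """Same model as before, but sampling only the top-left quarter of the grid."""
    rng = random.Random(seed)
    hits = set()
    for _ in range(n_shots):
        # Biased input range: co-ordinates 0-4 only, so most of the grid is never probed.
        shot = (rng.randrange(GRID_SIZE // 2), rng.randrange(GRID_SIZE // 2))
        if fire(shot):
            hits.add(shot)
    return hits

uniform = monte_carlo_battleships(1000)
biased = biased_battleships(1000)
# The two-cell ship at (7,7)/(8,7) is invisible to the biased sampler, no matter
# how many shots it takes.
print(f"uniform sampling found {len(uniform)} ship cells, biased found {len(biased)}")
```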
Monte Carlo Finances
Now, generally, what physicists first do intelligently, economists copy hopefully. So when, back in 1964, David Hertz suggested that Monte Carlo analysis could be used to model business problems, the result was lots of excited financiers developing nice models. In particular they started developing models of risk.
Unfortunately once you give an analyst an analytical tool they’re highly likely to start using it in inappropriate ways. Monte Carlo analysis, done properly, is darned difficult when you’re applying it to real-world situations, especially those involving irrational humanity, and shouldn’t be used unless there’s no other option. As David Nawrocki comments:
“Essentially Monte Carlo simulation is useful when nothing else will work.”
Which, it turns out, means basically that it’s pointless using it to try and analyse financial market returns. As Harold Evensky explains in Heading for Disaster:
“I don’t pick on Monte Carlo simulation because I think it’s a bad tool. It’s a wonderful tool. Monte Carlo simulation is an effective way of educating people regarding the uncertainty of risks. Unfortunately, it’s not nearly the panacea that is suggested by some commentators. Rather than reducing uncertainty, Monte Carlo simulation increases the guesswork manyfold.”
Uncertainty, Again
And why is this? It’s our old friend, uncertainty:
“The problem is the confusion of risk with uncertainty. Risk assumes knowledge of the distribution of future outcomes (i.e., the input to the Monte Carlo simulation). Uncertainty or ambiguity describes a world (our world) in which the shape and location of the distribution is open to question. Contrary to academic orthodoxy, the distribution of U.S. stock market returns is far from normal.”
Yep, we don’t know the shape of the input, so our “random” data are skewed to our implicit beliefs. In fact most of the time we’re better off without these models, because they simply provide false certainty about impossible-to-model risks. So if someone offers you investing advice based on such a model, your best bet is to go find a different rune reader, because you’re as likely to go bust as make your fortune taking it.
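A minimal sketch of this point – with every number (the 7% mean return, 18% volatility, 30-year horizon and the Student-t fat-tailed alternative) picked purely for illustration rather than estimated from real market data – shows how strongly a Monte Carlo answer depends on the distribution you choose to feed it:

```python
import numpy as np

# Illustrative assumptions only: not estimates from any real market.
rng = np.random.default_rng(0)
n_paths, n_years = 100_000, 30
mu, sigma = 0.07, 0.18

def terminal_wealth(annual_returns):
    """Grow one unit of wealth through a matrix of simulated annual returns."""
    capped = np.maximum(annual_returns, -0.99)   # a simulated year can't lose more than everything
    return np.prod(1.0 + capped, axis=1)

# Scenario 1: annual returns drawn from a normal distribution.
normal_returns = rng.normal(mu, sigma, size=(n_paths, n_years))

# Scenario 2: fat-tailed returns (Student-t, rescaled to the same volatility).
df = 3
t_draws = rng.standard_t(df, size=(n_paths, n_years))
t_returns = mu + sigma * t_draws / np.sqrt(df / (df - 2))

for label, rets in (("normal", normal_returns), ("fat-tailed", t_returns)):
    wealth = terminal_wealth(rets)
    shortfall = np.mean(wealth < 0.5)   # share of paths ending below half the starting stake
    print(f"{label:>10}: median terminal wealth {np.median(wealth):.2f}, "
          f"P(ending below 0.5) = {shortfall:.2%}")
```

Same mean, same volatility, very different tail risk: change the input distribution and the model’s “answer” changes with it.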
Lise Meitner’s Prize
Of course Lise Meitner didn’t use or require Monte Carlo analysis but instead did her own heavy lifting to figure out the physics behind nuclear fission. Her key role in this, and the chain reaction of events that ended in Hiroshima and the Cold War, is oft-overlooked – and she’d be pleased about that, for she refused to work on the A-bomb research of the Manhattan Project.
In the aftermath of the Second World War a Nobel Prize for the discovery of nuclear fission was awarded – to Otto Hahn alone. Shamefully, Lise Meitner’s role was ignored and the omission never corrected. Still, if we study carefully the upper reaches of the Periodic Table, in the transuranic elements that Fermi got the Nobel Prize for not discovering, it becomes apparent that her contribution to science has, quite rightly, not been entirely forgotten.
Related articles: The Lottery of Stock Picking, Risky Bankers Need Swiss Cheese, Not VaR, Physics Risk Isn't Market Uncertainty
Contrary to academic orthodoxy, the distribution of U.S. stock market returns is far from normal.”
My guess is that this conclusion is based on an analysis that does not adjust for valuations. It was probably either done before the Efficient Market Theory was discredited or it was done by someone who still has a lingering belief in the Efficient Market Theory.
We should NOT expect returns to fall in a normal distribution now that we know that valuations affect long-term returns. But we SHOULD expect returns to fall in a normal distribution around the return predicted by a valuation-informed analysis.
If you know of any research showing that returns do not fall in a normal distribution around the return predicted by a valuation-informed analysis, I would be grateful if you would let me know about it, Tim.
Rob
Rob,
I am not sure if I understand your comment exactly, but may I suggest "The (Mis)behavior of Markets" by the late Benoit Mandelbrot. He lays out the case against such distributions fairly well.
Best,
Jim
I am not sure if I understand your comment exactly
I'll try to explain a bit more carefully what I am getting at, Jim.
If the market were efficient, stocks would always be priced properly and the most likely return each year would be 6.5 percent real (the average long-term return). We would expect returns to fall in a normal distribution with 6.5 the mid-point.
As Tim points out, returns do NOT fall in a normal distribution around 6.5.
But we now know that the market is NOT efficient. Thus, it was always an unrealistic expectation to think that returns would fall in a normal distribution around 6.5 percent real. What returns should be doing is falling in a normal distribution around the mid-point return for each valuation level as predicted by a regression analysis of the historical data.
For example, the most likely annualized 10-year return at the valuation level that applied in 1982 was 15 percent real. The most likely annualized 10-year return at the valuation level that applied in 2000 was a negative 1 percent real.
It obviously is not realistic to expect returns to fall in a normal distribution when the valuation starting points differ by so much. What we should be checking is whether the deviations from the most likely 10-year return fall in a normal pattern or not.
If the annualized 10-year return starting from 1982 was 13 percent real, that's 2 points less than the most likely number (as predicted by a valuations-based analysis). If the annualized 10-year return starting from 2000 was a positive 1 percent real, that's 2 points better than the most likely number (as predicted by a valuations-based analysis). Those two results (if they had been real and not hypotheticals that I am putting forward here only for discussion purposes) would balance each other out and you would have a normal distribution around the most likely valuation-adjusted result.
I do not know whether results fall in a normal pattern or not. I do not know whether anyone has ever checked this. I think it would be a good thing for someone to check (I do not have the statistics skills to do the job). It would make sense to me for valuation-adjusted numbers to fall in a normal distribution whereas I am not able to see how non-valuation-adjusted returns could do so.
I am not surprised that the non-valuation-adjusted returns do not fall in a normal distribution pattern. I do not think that we ever should have expected them to do so. I tend to think that the only reason why anyone ever saw that as a realistic possibility is that there was once a large number of people who believed in the Efficient Market Theory.
Thanks for your recommendation of "The (Mis)behavior of Markets". That one is on my list. I don't know whether it addresses this question or not. I hope that somewhere down the line I will be able to hook up with some statistics-minded person who will be able to check this out for me (and for any others interested in this particular aspect of the question). I personally find it a fascinating little puzzle.
Rob
Rob
Take a look at John Hussman's work. I think this is what you are getting at. He has discussed this issue in several of his weekly posts:
http://www.hussmanfunds.com/wmc/wmc080630.htm
Thanks for helping me out, Anonymous.
I will go take a look at that link.
Rob
Valuation clearly matters, but market returns are definitively not a normal distribution.
Compared to a normal distribution, they appear to:
1) Exhibit high kurtosis - or 'fat tails'. Extreme events are much, much more common than suggested by a normal distribution.
http://en.wikipedia.org/wiki/Kurtosis_risk
2) Be heteroskedastic - or exhibit different standard deviations (or variance) at different times.
http://www.riskglossary.com/link/heteroskedasticity.htm
3) Have negative skew - or the most extreme events are heavily biased to the downside, rather than the upside. Think 'black swans'.
http://www.riskmetrics.com/on_the_whiteboard/20090915
All 3 of these factors are just a technical way of saying that markets are riskier and harder to predict than anyone thinks. Modelling markets as a normal distribution is a good way to crash your hedge fund.
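For anyone who wants to check those three properties on their own data, a rough sketch using scipy might look like the following – the file name, column layout and rolling window are placeholders, not recommendations, so substitute your own return series:

```python
import numpy as np
from scipy import stats

# Hypothetical input: a text file with one closing price per line.
prices = np.loadtxt("index_prices.csv")
returns = np.diff(np.log(prices))                # daily log returns

print("skewness:        ", stats.skew(returns))      # < 0 suggests a downside bias
print("excess kurtosis: ", stats.kurtosis(returns))  # > 0 suggests fat tails (normal = 0)
jb = stats.jarque_bera(returns)
print("Jarque-Bera p:   ", jb.pvalue)                # tiny p-value => reject normality

# A crude look at heteroskedasticity: rolling volatility over ~one trading month.
window = 21
rolling_vol = np.array([returns[i:i + window].std()
                        for i in range(len(returns) - window)])
print("rolling vol range:", rolling_vol.min(), "to", rolling_vol.max())
```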