The St. Petersburg Paradox
An economic problem posed over 250 years ago still causes angst for economists, investors and actuaries today, largely because it’s never been satisfactorily solved – probably because it can't be. Despite this, the resolution proposed all those years ago has travelled down the ages, insinuating itself into every nook, cranny and other archaic crevice of finance you care to mention, like an elasticated thong on an overweight man.
The problem, posed by Nicholas Bernoulli, is known as the St. Petersburg Paradox after the city of residence of his cousin, Daniel Bernoulli, who was the first person to propose an answer to it. Daniel’s great idea – and it was a truly great idea – is known as utility. Without the concept of utility it’s doubtful that any of us would be here to debate this issue, yet the findings of behavioural finance show that it’s almost certainly wrong. The Paradox of St. Petersburg has plenty of life left in it yet.
Expected Value of a Gamble
It was Blaise Pascal who, along with Pierre de Fermat, figured out how to calculate the probabilities of future outcomes. Pascal then went on to pose Pascal's Wager, eventually deciding that even a small chance of a terrible outcome, an eternity in Hell, was worth forgoing future Earthly pleasures for. This was a step along the way to forecasting and thence to decision making, but it failed to take into account the variability in human risk taking: the probabilities of the numbers on the roll of a die may be fixed, but the willingness of individuals to bet on them isn't.
Then Nicholas Bernoulli came up with the St. Petersburg Paradox, a position that stands in stark contradiction to Pascal's Wager, arguing that there is no limit to the amount you should be prepared to gamble in quest of a greater fortune. The paradox, simply stated, envisages two gamblers betting on the toss of a coin. The coin is tossed until it lands on heads: if that happens on the first toss the winner collects $2, on the second $4, on the third $8 and so on, the pot doubling with every tail along the way, forever if need be. Weight each possible payout by its probability and every potential round adds a dollar to the expected value of the game, all the way to infinity.
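As a back-of-the-envelope illustration – my own sketch, not part of the original post – the sum behind that claim, and a quick simulation of the game, fit in a few lines of Python; the payoff convention assumed is $2^n when the first head arrives on toss n:

```python
import random

def expected_value(rounds):
    """Expected value of the game truncated at `rounds` possible tosses.
    Round n pays $2**n with probability 2**-n, so each round adds exactly $1."""
    return sum((0.5 ** n) * (2 ** n) for n in range(1, rounds + 1))

def play_once():
    """Toss a fair coin until the first head; payout is $2**n, n = toss count."""
    n = 1
    while random.random() < 0.5:  # tails: the pot doubles and we toss again
        n += 1
    return 2 ** n

print(expected_value(10), expected_value(100))  # 10.0 100.0 -- $1 per round
print(sorted(play_once() for _ in range(10)))   # yet typical payouts are tiny
```

Truncate the sum wherever you like and the expected value equals the number of rounds allowed; let the rounds run on without limit and so does the value.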
Thus what the St. Petersburg Paradox argues is that the value to the gambler of the coin-flipping game is infinite: it’s like owning a cash machine that continually replenishes itself. However, the paradox goes further, asking: if the value of the gamble is infinite, what is the maximum stake a rational person should be willing to bet on it? How much is it worth paying for a cash machine that never runs out of money? The rational answer is, of course, that there is no maximum: any amount is less than the value of the machine.
Rational Madness
Although the outcome of the St. Petersburg Paradox is arrived at through the very definition of economic rationality, no normal human being would entertain such a gamble. In fact, the most the average person will consider staking on such a bet is about $20, suggesting that most people aren’t prepared to wager on the run of tails stretching much past four in a row – a $20 stake only pays off once the pot reaches $32, a one-in-sixteen shot – whatever probability analysis might suggest.
So this, in a nutshell, is the conundrum at the heart of modern economics: a gamble supposedly worth staking any amount you care to mention on, up to just short of infinity, is actually only worth about $20 to real people. Somehow you get the feeling that there’s an eternal verity out there holding its head and wondering what it’s done to deserve this.
Daniel Bernoulli's Utility
The classical solution to the paradox was proposed by Daniel Bernoulli, who argued that the problem was that a dollar isn’t a dollar for everyone. What he suggested was that the value of an extra dollar falls as you become richer – if you only have $1 then an extra dollar has a huge benefit, but if you have $1,000,000,000 then an extra dollar is neither here nor there. In fact it’s probably under the sofa or in the dog. He called this relative value ‘utility’.
The point is that as each successive tail doubles the prospective winnings, the utility of the additional money supposedly becomes less and less. There comes a point where the chance of losing on the next coin flip outweighs the allegedly infinite riches that lie ahead. We simply don’t need any more cash so we stop: the St. Petersburg Paradox is resolved because we don’t need to go on any further. And, of course, this is a personal matter – different people will have different cut-off points, as anyone who's watched Who Wants to be a Millionaire can testify.
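Bernoulli’s own fix was to measure money logarithmically. As a rough sketch of how a finite answer then drops out – my illustration, not the article’s; the log utility function, the 200-round truncation and the bisection search are all assumptions – the maximum stake a log-utility gambler should pay can be found numerically:

```python
import math

def expected_log_wealth(wealth, fee, max_rounds=200):
    """Expected log of final wealth after paying `fee` to play once.
    The payoff is $2**n with probability 2**-n; truncating the sum at
    max_rounds changes nothing material at double precision."""
    return sum((0.5 ** n) * math.log(wealth - fee + 2.0 ** n)
               for n in range(1, max_rounds + 1))

def max_fee(wealth):
    """Largest fee at which playing doesn't reduce expected log wealth,
    found by bisection between zero and the gambler's whole wealth."""
    lo, hi = 0.0, float(wealth)
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_log_wealth(wealth, mid) >= math.log(wealth):
            lo = mid
        else:
            hi = mid
    return lo

for w in (1_000, 1_000_000):
    print(f"wealth ${w:,}: maximum rational stake ~ ${max_fee(w):.2f}")
```

Under these assumptions the answer comes out at roughly $11 for a gambler with $1,000 and roughly $20 for a millionaire – strikingly close to the stake people say they’d actually accept.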
Omnipresent Utility
Only, to be honest this doesn’t seem right. If I’m on the crest of an infinite money generator then the difference between $100 million and $200 million may not appear to offer me much more utility but frankly I’m sure my kids can use it, or my wife’s lawyers.
This idea of utility, this indefinable but economically critical “stuff”, permeates economics, appearing in virtually every major piece of economic thinking over the last couple of centuries. This paper by Tibor Neugebauer gives a feeling for the centrality of the idea. Hell, even Prospect Theory, one of the fundamental legs of behavioural finance, assumes it.
Like the Æther
Utility’s reminiscent of the æther, the invisible medium that was once supposed to fill the space between the planets. Although no one could prove whether it existed or not, it was required scientifically to maintain the illusion of a mechanical universe. Unfortunately it turned out to be a figment of scientists' imaginations. Like the æther, no one quite knows what utility is or how to measure it, but it’s needed to make economic models work. It’s the deus ex machina of economics, the solution to every problem that economists can’t actually find an answer to. If it turned out that utility, like the æther, doesn’t exist it would send economics into a tailspin.
Despite these issues Bernoulli’s idea brought the human risk-taker into the equation of risk for the first time. It’s not just a question of whether the gamble is “logical” or not – and if you’ve read this carefully you’ll probably be wondering whether “logical” makes any sense any more – it’s about whether people behave as the idea of utility suggests they do. The evidence, hard though it is to believe, is that they do – roughly. If they didn’t, insurance wouldn’t work, most of the time.
If this all sounds vaguely familiar then you’re probably thinking about the arguments made for dotcom stocks back in the late nineteen nineties. These were viewed as perpetual cash machines, so no price was too high for them. Until real life reasserted itself, of course. Exactly this idea was analysed by David Durand in Growth Stocks and the Petersburg Paradox back in the nineteen fifties, during a previous bout of investor over-exuberance:
"The very fact that the Petersburg problem has not yielded a unique and generally acceptable solution to more than 200 years of attack by some of the world's great intellects suggests, indeed, that the growth-stock problem offers no prospect of a satisfactory solution".An Approximation to Genius
That Daniel Bernoulli was probably wrong about utility shouldn’t obscure the sheer genius and importance of his idea. For the first time risk management was able to include the idea of the risk taker in the evaluation of odds, and that extended the concept of risk way beyond anything we’d previously been able to do. It enabled humans to measure the risk of economic investments and rationally insure against the unknown and in so doing it opened the door to a whole new world of calculated risk taking and extraordinary economic growth.
If Bernoulli was wrong he was wrong in the same way that Newton was wrong about gravity: he developed a theory that roughly predicted what actually happens without understanding why it does so. Yet sometimes an approximation is more than good enough. In April 1970 the crew of Apollo 13 used Newton's laws to slingshot their crippled craft around the Moon and bring it home across nearly 250,000 miles of ætherless space to a pinpoint splashdown on the 17th. With Bernoulli’s utility we’ve done the equivalent every day for the best part of three centuries; it’s one hell of an epitaph for something that probably doesn’t exist.
Related Articles: Pascal's Wager - For Richer, For Poorer, Mandelbrot's Mad Markets, Markowitz's Portfolio Theory and the Efficient Frontier
If I’m on the crest of an infinite money generator then the difference between $100 million and $200 million may not appear to offer me much more utility but frankly I’m sure my kids can use it, or my wife’s lawyers.
It seems to me that the missing element in the analysis might be time. I'd rather have $100 today than $100 million delivered 100 million years from today. Maybe that's just me.
A related puzzle that I have pondered is Buffett's explanation of the proper stock price being the discounted value of all future dividends. If you went out long enough, the amount of the future dividends would be infinite and the discounted value of infinity would also be infinity. So I think there has to be a time cut-off to make this sort of thing make sense.
Rob
In reply to your second comment, won't in fact the dividend discounted out to eternity approach closer and closer to zero?
Sure the amount of future dividends is infinite, but it is also being discounted at an infinite rate, which results in a value which gradually approaches zero.
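[To put that in symbols – my addition, not the commenter's – for a constant dividend D and a positive discount rate r the terms shrink geometrically and the infinite sum converges:

$$\mathrm{PV} \;=\; \sum_{t=1}^{\infty} \frac{D}{(1+r)^{t}} \;=\; \frac{D}{r}.$$

Only if dividends grow at least as fast as the discount rate does the sum blow up, which is exactly the growth-stock version of the paradox Durand wrote about.]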
This is super-interesting.
Infinite expected payout, but only worth $20?
It's taken me about an hour, but I've finally solved this to my satisfaction.
I don't know if this is the 'correct' solution, but it seems rational enough, and would tell me how much to wager.
Here's my logic:
1) If it's rational to bet a certain amount once, then it is rational to wager this same amount again.
2) A rational investor is averse to permanent loss of capital.
3) If you bet too much, then your 'risk-of-ruin' is quite high - if you keep playing you are likely to be wiped out.
4) The chance of hitting a really high payout before being wiped out is relative to the size of your capital.
5) Therefore, the amount a rational investor should be willing to risk on the St Petersburg game is relative to the size of their capital.
So what I would do is take my total amount of capital, and figure out my risk of ruin (which I will define as losing 50% or more of my money), assuming that I continuously played the game with my capital (but not my winnings). I would then set my maximum allowable bet at a level where my risk of ruin is 5% or less.
Warren Buffett could play this game a few million times more than me, so he could rationally risk a lot more money per bet and still come out ahead.
$20 is too much, unless you have LOTS of money. According to a Monte Carlo simulation using Excel, even if you play the game 100,000 times, you still usually come out behind when risking $20.
I would bet about $12. On 1,000 plays, you come out ahead about half of the time, but the game is pleasantly asymmetrical since the upside is unlimited.
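[A simplified version of that kind of experiment – my sketch, not the commenters' spreadsheet; the bankroll, stake, number of plays and trial count are all arbitrary choices – simulates repeated plays at a fixed stake and counts how often you finish ahead, and how often you hit the 50% drawdown the earlier commenter calls ruin:

```python
import random

def play_once():
    """One St. Petersburg game: toss until the first head, payout $2**n."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return 2 ** n

def trial(stake, plays, bankroll):
    """Play `plays` games at a fixed `stake` from a starting `bankroll`.
    Returns (ended ahead?, ever dropped to half the starting bankroll?)."""
    cash, ruined = bankroll, False
    for _ in range(plays):
        cash += play_once() - stake
        ruined = ruined or cash <= bankroll / 2
    return cash > bankroll, ruined

def summarise(stake, plays=1_000, bankroll=1_000, trials=2_000):
    results = [trial(stake, plays, bankroll) for _ in range(trials)]
    print(f"stake ${stake}: ahead in {sum(a for a, _ in results) / trials:.0%} "
          f"of trials, ruined in {sum(r for _, r in results) / trials:.0%}")

for stake in (12, 20):
    summarise(stake)
```

Because the payoff distribution is so skewed, the break-even stake over any finite number of plays sits far below the infinite expected value, which is the point both commenters are circling.]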
Sure the amount of future dividends is infinite, but it is also being discounted at an infinite rate, which results in a value which gradually approaches zero.
Thanks, Jason. I've learned not to trust my instinctive responses to math puzzles, but what you are saying sounds right to me.
Rob
Parker, you have the right idea. Here's a paper that lays out a compelling answer to the paradox -- perhaps not "the answer" but a reasonable one.
http://mpra.ub.uni-muenchen.de/5233/1/MPRA_paper_5233.pdf
I haven't thought about it much, but the relevant issue probably boils down to counterparty risk. The casino's ability to pay -- and therefore the number of times I'm allowed to play -- is what determines willingness to pay.
@Rob, if you do the arithmetic, the sum of an infinite series of discounted cashflows converges. In fact, there are / have been traded perpetual bonds: http://en.wikipedia.org/wiki/Perpetuity.
ReplyDelete@Parker & Chad Wassell, I suspect that this post was inspired by my comment to an earlier post that I didn't think that a preference for a risk-free return over a somewhat higher but risky one was self-evidently irrational. I think that timarr sort of rolled his eyes and thought "it's the utility, stupid". But I don't believe that the price of risk has anything to do with utility.
I think the same thing about the St. Petersburg paradox. To interpret it in terms of utility, you need three assumptions:
1. The game is played infinitely fast (or else interest rates are zero.)
2. The counterparty is risk-free for obligations of unbounded size.
3. Rational investors are indifferent to ruin - i.e. the rational price of risk is zero.
In short, the "paradox" is founded on two impossibilities and a wrong assumption.
I try not to do the eye-rolling thing, I’ve got teenage daughters to do that for me :)
While Phil’s right to focus on the impossibility of actually implementing the paradox, I think it’s an understandable approach – to make any progress on complex issues we have to simplify the problem to a point where we can make a start. So I don’t think the fact that it’s not a real problem invalidates the paradox, but the fact that most non-economists think it’s a non-problem is itself interesting, given that virtually every major economist of the past two hundred years has had a stab at analysing it.
I suspect that the problem is that the simplifying assumption that we’re capable of rationally assessing risk, even in terms of our own utility, doesn’t stand up to even a cursory examination unless we define “rational” in a rather odd way.
but when we can compare activity levels in the prefrontal cortex and NAcc and insula and amygdala, who's to say we can't one day measure 'utility'?