Nobel Games
Game Theory is a theory of human decision making, and one that’s very popular in the dismal science: no fewer than thirteen of the forty-four Nobel Prizes in economics have been awarded to practitioners in the area. And, indeed, Game Theory is a powerful tool for researchers in many fields, the only problem being that if you give people a powerful tool they’re likely to want to wave it around and use it on everything, regardless of taste and applicability.
Thus we find that Game Theory and behavioral economics collide in odd ways, which turns out not to be surprising as the former is built on the foundations of the standard economic approaches. Even so, a basic appreciation of the mysterious workings of economic gamers is an essential part of any investor’s kitbag of mental models.
Game Theory has a long history – we can find it in the writings of Pliny the Younger, nearly two thousand years ago, who found himself in the minority group during a trial: his group wanted to acquit, a second group wanted exile and the third, largest group, wanted execution. By aligning themselves with the group favoring exile Pliny’s party could avoid their worst outcome – death – at the cost of giving up their best – freedom. However, the formalization of Game Theory dates from 1928, and the work of a young Johnny von Neumann, whom we last saw designing computers as part of the Manhattan Project (see: Monte Carlo Simulation or Nuclear Bust).
Game Theory is, in essence, about how we make decisions. The Games themselves are an array of strategic situations, which are more or less representative of situations we might encounter in real life. Game Theory itself is a highly mathematical approach to figuring out what people will actually do when they find themselves playing a Game; so mathematical, in fact, that it took someone as clever as von Neumann to formalise it. And von Neumann was staggeringly clever.
Nashing It Up
In his original paper on the subject he demonstrated that in a simple two-player, zero-sum game (i.e. there’s no overall benefit: if one player wins then the other must lose) there is always a strategy for each player that, whatever their opponent does, minimizes their maximum possible loss and maximizes their minimum gain: the minimax theorem, it’s called. Famously John Nash took this idea and, via a mental breakdown, developed it into the concept of a Nash equilibrium – a situation in which each player is making the best decision they can given the decisions of the other players, so that no one can benefit from unilaterally changing their own decision.
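For those who like to see the machinery, here’s a minimal sketch in Python of what the theorem amounts to in the simplest zero-sum game, matching pennies (the payoffs are purely illustrative): the best worst-case one player can guarantee equals the tightest cap the other can impose.

```python
# A minimal sketch of von Neumann's result for matching pennies, a two-player
# zero-sum game. Payoffs chosen purely for illustration.
A = [[1, -1],   # row player's payoffs; the column player receives the negative
     [-1, 1]]

def worst_case_for_row(p):
    """Row player's guaranteed payoff if they play row 0 with probability p."""
    return min(p * A[0][c] + (1 - p) * A[1][c] for c in range(2))

def best_case_for_row(q):
    """Row player's payoff ceiling if the column player plays column 0 with probability q."""
    return max(q * A[r][0] + (1 - q) * A[r][1] for r in range(2))

grid = [i / 1000 for i in range(1001)]
maximin = max(worst_case_for_row(p) for p in grid)  # best worst-case for the row player
minimax = min(best_case_for_row(q) for q in grid)   # tightest cap the column player can impose

print(maximin, minimax)  # both come out at 0: the minimax theorem says they always coincide
```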
In any given situation there may be multiple Nash equilibria. A classic example is deciding which side of the road to drive on. If we all drive on the right side of the road then we minimize the chances of colliding with someone coming the other way. Ditto if we all drive on the left. Hence there are two Nash equilibria. Rationally we should all agree on our approach and then implement it; and of course, absent the odd drunk or jetlagged American tourist leaving a British airport, we do. Game Theory isn’t purely a theoretical wander down mental dead-ends.
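Here’s a minimal sketch of that driving game in Python (the payoff numbers are purely illustrative): it checks, for each pair of choices, whether either driver could do better by switching alone, and the two matching conventions are the only pairs that survive.

```python
# The driving game: both players get 1 if they pick the same side of the road
# and 0 if they collide. Payoffs are illustrative assumptions.
SIDES = ["left", "right"]
payoff = {(a, b): (1, 1) if a == b else (0, 0) for a in SIDES for b in SIDES}

def is_nash(a, b):
    """(a, b) is a Nash equilibrium if neither driver gains by switching alone."""
    mine, theirs = payoff[(a, b)]
    no_better_for_me = all(payoff[(alt, b)][0] <= mine for alt in SIDES)
    no_better_for_them = all(payoff[(a, alt)][1] <= theirs for alt in SIDES)
    return no_better_for_me and no_better_for_them

print([pair for pair in payoff if is_nash(*pair)])
# [('left', 'left'), ('right', 'right')] -- two equilibria, one per convention
```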
Omerta
However, the Nash equilibrium in any given situation is not necessarily the best possible outcome for each individual. The famous Prisoner’s Dilemma offers us an example of this. In the dilemma two prisoners are separately offered a choice: if they both choose to stay silent they both get a short sentence; if one testifies against the other, the squealer goes free and their partner gets a long sentence; if they both squeal they both get a middling sentence. The Nash equilibrium in this case is for both parties to squeal – because whatever their partner does, squealing leaves each prisoner better off.
The problem with this is that the best joint outcome comes from both saying nothing – but, in the absence of any external co-ordinating factor, like a Mafia boss with a long reach and a nasty set of sidekicks, this opens up the risk that your co-criminal breaks the code of silence and leaves you festering in prison for a long time. All of which leads us neatly into why Game Theory isn’t quite the universal tool you might be forgiven for thinking it is.
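A similar sketch makes the point (the sentence lengths, in years, are made-up assumptions): mutual squealing is the only pair of choices from which neither prisoner can shorten their own sentence by changing their mind alone, even though mutual silence would leave both of them better off.

```python
# The Prisoner's Dilemma described above; sentence lengths in years, lower is
# better, and the numbers are illustrative assumptions.
YEARS = {("silent", "silent"): (1, 1),    # both stay quiet: short sentences
         ("silent", "squeal"): (10, 0),   # the silent one takes the long sentence
         ("squeal", "silent"): (0, 10),
         ("squeal", "squeal"): (5, 5)}    # both squeal: middling sentences

ACTIONS = ["silent", "squeal"]

def is_nash(a, b):
    """Neither prisoner can shorten their own sentence by changing only their choice."""
    return (all(YEARS[(alt, b)][0] >= YEARS[(a, b)][0] for alt in ACTIONS) and
            all(YEARS[(a, alt)][1] >= YEARS[(a, b)][1] for alt in ACTIONS))

print([pair for pair in YEARS if is_nash(*pair)])   # [('squeal', 'squeal')]
# ...even though ('silent', 'silent') would leave both prisoners better off.
```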
Nailed
There are some clues in the prior narrative – the idea that people are rational, the concept that we’re aiming at an equilibrium and the extreme focus on mathematical reasoning, often to the exclusion of any actual data: all topics we've looked at in the past. As Herbert Gintis states in The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences:
“The reigning culture in game theory asserts the sufficiency of game theory, allowing game theorists to do social theory without regard for either the facts or the theoretical contributions of the other social sciences. Only the feudal structure of the behavioral disciplines could possibly permit the persistence of such a manifestly absurd notion in a group of intelligent and open-minded scientists. Game theorists act like the proverbial “man with a hammer” for whom “all problems look like nails.””
Gintis also suggests that the costs of reasoning may be so heavy that, if we are rational, we don’t try and learn complex strategies for confusing situations but simply copy people who are, or who appear to be, successful: which leads to an alternative take on rationality in which herding behaviour among investors and others is a perfectly sensible extension of a natural process – and, further, to the idea that this isn't something markets can ever eliminate.
Behavioral Game Theory
Despite these reservations Game Theory is still a powerful tool, offering a unique way of analysing human decision making, and it’s not surprising that there are attempts underway to unify it with behavioral economics. Colin Camerer has been at the heart of many of these, attempting to weld ideas about bounded rationality onto the theorising of the game theorists. In his book on Behavioral Game Theory Camerer draws on dozens of examples of real-life gaming behaviour in an attempt to develop a theory based upon what people actually do, rather than what the theorists expect.
A possible synergy suggests itself in Camerer’s paper with Teck-Hua Ho and Juin Kuan Chong on Behavioral Game Theory: Thinking, Learning and Teaching. The idea is that players do indeed eventually come to some form of equilibrium, but that this isn’t a quick process: it’s as though we play the Prisoner’s Dilemma not once, but over and over again. And out of this comes the odd, but highly intriguing idea that irrational people subjected to market conditions may arrive at fairly rational outcomes: an idea we'll revisit soon.
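To get a feel for how this might work, here’s a toy reinforcement-learning sketch – not Camerer, Ho and Chong’s actual model, just a simple “reinforce whatever paid off” rule with illustrative payoffs – in which two simulated prisoners, playing the dilemma over and over, drift towards the squealing equilibrium without ever calculating anything.

```python
import random

# A toy reinforcement-learning simulation (illustrative assumptions throughout):
# two players repeat the Prisoner's Dilemma and each simply reinforces whatever
# action has paid off for them. Payoffs: mutual silence 3, mutual squealing 1,
# lone squealer 5, betrayed partner 0.
PAYOFF = {("silent", "silent"): (3, 3),
          ("silent", "squeal"): (0, 5),
          ("squeal", "silent"): (5, 0),
          ("squeal", "squeal"): (1, 1)}
ACTIONS = ["silent", "squeal"]

def choose(propensity):
    """Pick an action with probability proportional to its accumulated payoff."""
    return random.choices(ACTIONS, weights=[propensity[a] for a in ACTIONS])[0]

random.seed(1)
prop1 = {a: 1.0 for a in ACTIONS}            # both players start indifferent
prop2 = {a: 1.0 for a in ACTIONS}

for _ in range(5000):
    a1, a2 = choose(prop1), choose(prop2)
    p1, p2 = PAYOFF[(a1, a2)]
    prop1[a1] += p1                          # reinforce what was just played
    prop2[a2] += p2

share = {a: round(w / sum(prop1.values()), 2) for a, w in prop1.items()}
print(share)  # the 'squeal' share tends to dominate: the equilibrium emerges slowly
```

The point of the sketch is the mechanism, not the numbers: nobody in it reasons about the other player at all, yet repeated trial and error still drags both towards the Nash outcome.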
Chickens and Psychopaths
Perhaps the key issue for Game Theory is that it only works if everyone playing the Game shares the same beliefs. Playing chicken with a psychopath is a no-win situation, but within the same culture and society we do often share the same values and expectations (see: Is Your CEO A Psychopath?). The concept of a market economy is just a set of principles of this nature but that’s not the same as aiming for a rational, economic equilibrium: we often obey social norms even when it makes no sense to do so in economic terms.
Trying to work our way through this muddle involves multiple disciplines – economics and Game Theory, psychology and behavioral finance, sociology and even elements of evolutionary biology. All of these disciplines have theories of human decision making, and they’re all different. Unfortunately until, or unless, we start to integrate these different fields of knowledge we’ll continue to develop very clever and mostly useless economic theories.