Wednesday, 18 August 2010

On Incentives, Agency and Aqueducts

Risk Management, Roman Style

There's an aqueduct in Segovia, in Spain, that's stood the test of time. A lot of water has flowed over that bridge … two thousand years' worth, more or less, since it was built by the Roman Empire. Back then risk management consisted of getting the chief engineer to stand underneath the structure when they removed the supports: now that's a proper incentive.

Incentives stand at the heart of a lot of human behaviour in corporations, but financial theorists have had a great deal of difficulty in understanding that an incentive is not necessarily the same thing as a financial reward. Although the ideas of psychologists and sociologists are slowly seeping through, there's still a long way to go before there's a proper appreciation of what motivates people. In the meantime we're stuck with Agency Theory, the sheer power of grim self-interest: it's like real life, but not as we know it.

Agency in Hammurabi

In fact the idea of making builders stand beneath their constructions when first erected goes back a lot further than the Romans: it can be found in the Code of Hammurabi, from Babylonian times, where the punishment for building a house that fell down was death. To be fair, most crimes in Babylonian times were punishable by death, largely, it seems, because of a lack of imagination when it came to devising punishments. Forcing offenders to learn about Agency Theory would have been one alternative.

The idea is that there are two parties involved in forming an employment contract (actually any sort of contract, but let's talk specifics here) – a principal (the employer) and an agent (the employee). Supposedly these two parties have different objectives: roughly, the employer wants to grind the employee into the ground at minimal cost and the employee wants to shirk about while earning as much as possible. It's the normal economic view of human nature: miserly and miserable.

Monitoring

Digging around within the theory you'll find lots of stuff about information flow, the point being that the more information the employer has about the employee, the less chance the employee has of goofing off – which, of course, is an opportunity for employee monitoring system suppliers everywhere. It's the academic justification for corporate surveillance of workers.

Roughly two forms of contract are imagined: one based on behaviour and one based on outcomes. Behaviour-based contracts basically mean getting paid for turning up for work; outcome-based ones mean getting executive share option awards for sitting in a big chair. Before you get carried away with this, the theory goes on to posit that the greater the uncertainty in the outcome of any job, the less likely an employer is to go for an outcome-based (that is, incentive-based) scheme.

Risk and Incentives

This is because the theory predicts that the higher the risk, the greater the cost to the employer of shifting that risk onto employees. Which is rational, because if an employer wants me to take the risk of running an operation in some God-forsaken outpost of civilisation (or Basildon, as we call it in England) then I'll want a tonne of cash for doing so. This means there's a trade-off between figuring out how to monitor employees and paying them to take on risk through outcome-based reward packages. Ultimately, Agency Theory predicts that where outcomes are difficult to predict, employers will prefer to avoid incentive-based packages.
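
For the mathematically inclined, the flavour of this prediction can be captured with the standard textbook linear-contract sketch (in the spirit of Holmström and Milgrom; the notation is purely illustrative, not anything the argument above depends on). Pay the employee a salary plus a bonus rate on a noisy measure of output:

pay = α + β·x, where x = e + ε, e is effort and ε is noise with variance σ²

If effort costs the employee c·e²/2 and they dislike risk with coefficient r, the bonus rate that maximises the joint surplus works out as

β* = 1 / (1 + r·c·σ²)

so the noisier the measure of output (the bigger σ²), the weaker the incentive component of pay ought to be, and the more of the package ends up as plain salary.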

The only problem with this is that the theory doesn't really seem to hold. In The Tenuous Trade-off between Risk and Incentives Prendergast has suggested a couple of reasons why this may be. Firstly, in risky environments the employer may have no real idea how to monitor the employee's behaviour, leaving little option but to pay for results, and the employee may then be able to make more money for marginal improvements in performance.

Secondly, the widespread existence of outcome-based incentives in risky settings reflects the fact that monitoring processes tend to go awry in exactly these situations. In fact there's copious evidence that supervisors provide very poor feedback on employees at the best of times. In unstable and uncertain environments the information they provide is next to useless, which, again, forces employers to use performance-related pay and thus increases the potential rewards to employees – who will demand more money for taking on more risk.

The Hell of Self-Interest

This goes some way to explaining why firms offer senior employees rewards for outcomes over which they have no control. A general failure to index stock options to overall market movements is one example, where luck and a rising market often ensure that unearned and undeserved wealth flows the way of happy executives.

Of course, this view of employer-employee relations is one primarily governed by self-interest, in which all parties seek to maximise their financial advantage. This, as Hirsch et al (1987) pointed out, is the predominant paradigm of economics, and a profoundly depressing one to boot. There's certainly evidence that employees respond to financial incentives, but not quite in the way economists would have predicted. Prendergast gives a range of examples, including the one about software programmers incentivised per line of code who wrote inordinately long and complex programs. And then there are football quarterbacks:
“Consider the contract offered to Ken O’Brien, a football quarterback, in the mid-1980’s. Early in his career, he had a tendency to throw the ball to a member of the opposition. As a result, he received a contract that penalized him every time he threw the ball to a member of the opposition. However, while it was the case that he subsequently threw fewer interceptions, this was largely because he refused to throw the ball …”
Give a man an incentive and you’ll trigger a perverse reaction.

Social Identity, No?

The problem with all this incentivisation lark is that it assumes the preferences of the employee are a given: that all we care about is self-interest, maximising our rewards and minimising our efforts. Research in other areas, however, suggests that this is a woefully inadequate way of modelling human behaviour. If people try to maximise anything it's generally self-esteem, and this isn't a simple matter of personal pride but a reflection of how a person is viewed by their peer group: their 'social identity'.

The idea that social identity is a critical element in economics has been a long-term topic of research for Rachel Kranton and George Akerlof, who've produced a host of papers analysing the idea in a bunch of different contexts. In Identity and the Economics of Organizations they show how a work group identity – how individuals view themselves in relation to their fellow workers, and how the group views itself in relation to supervisors and employers – can radically affect the amount of effort that employees are willing to exert.

The simple idea that more monitoring leads to more control turns out to be wrong. In fact, if a monitoring supervisor aligns themselves too closely with the employer against the employee work group, the resulting antagonism can actually reduce output, regardless of the improved monitoring. Employees with a shared social identity can effectively block management attempts to monitor them.
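
Very loosely, and not in Kranton and Akerlof's own notation, the effect can be sketched as a payoff in which effort gets pulled towards the norm of whichever group the employee identifies with:

U = w(e) − c(e) − t·|e − e*(C)|

where C is the category the worker has adopted (an 'insider' who identifies with the firm, or an 'outsider' who defines themselves against it), e*(C) is that category's ideal effort level and t the strength of the identification. If the firm can make insiders of its people, the high insider norm does the work that supervision would otherwise have to do; if heavy-handed monitoring turns them into antagonistic outsiders with a low norm, more oversight simply buys less output.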

Kranton and Akerlof think that companies should focus less on financial incentives and more on finding ways of embedding the company in the social identity of employees. They argue that loosely supervised work groups don't function "because the workers maximise their own self-interest; they function because employees wish to fulfil the goals of the organisation".

Fostering Corporate Identity

Ideally employers should seek to ensure employees have a group identity that conforms with that of the company, but that, of course, relies on the company behaving in a way compatible with a social identity that individuals actually want to take on. A deceitful organisation which says one thing and does another isn't likely to foster a persona anyone wants to be part of.

This, at least, is a rather more hopeful view of the human condition than Agency Theory suggests. In this model the engineer stands under the aqueduct not because they're compelled to but because they want to, because that's what people in that situation feel they should do. To wit: in a well-run corporation loyalty is way more effective than agency.


Related articles: Perverse Incentives are Daylight Robbery, Basel, Faulty?, Mayhem with M&A

2 comments:

  1. Just think about what you do with your kids. Do you follow them around every second to make sure that they never do the wrong thing? Or do you try to instill values that will ensure that they do not themselves want to do the wrong thing? Which approach works best long-term?

    Rob

  2. Bullseye.

    I once had a similar discussion with a manager who, while accepting that his lines-of-code-style measurements would not be accurate, kept ending with, "But what do I measure?" He was unwilling to believe that the problem wasn't what he chose to measure, but that the very act of measuring would itself alter behaviour. Perhaps he did not feel he could justify his own position without engaging in such measurement.

    Rob's observation about children is good. I recall Charles Handy making a similar analogy with some examples (but not, offhand, the book).
