
Wednesday, 1 December 2010

Trading On The Titanic Effect

Scary Tactics

If you’re sailing icy seas you’d generally want to keep a watchful eye open for icebergs. Unless, of course, you’re in an allegedly unsinkable ship, in which case you’d probably prefer to opt for a spot of partying and an early snooze on the poop deck instead. The craft’s designers will likely not have bothered with wasteful luxury items like lifebelts, emergency flares or lifeboats either: what would be the point?

You have, of course, just fallen foul of the Titanic Effect, one of a number of self-fulfilling behavioural biases in which your expectations shape your behaviour and make it more likely that you'll succumb to the very problems you think you've overcome. Oddly, though, the problem suggests the solution: scare the living daylights out of the crew before you cast off.

Airbus Behaviour

The calamity that was the sinking of the Titanic is, of course, the obvious example of such behaviour but there are plenty of others. Take, for instance, the Airbus A320. This aircraft, the same type that Chesley Sullenberger successfully brought down on the Hudson River in 2009, is essentially a flying robot. It comes with a range of safety features that pilots hate because they take control out of their hands. The designers justify this by pointing out that the majority of air crashes occur not because the plane develops a fault but because the pilots make a mistake.

Yet, despite this, the safety record of the Airbus is no better than that of its main rival, Boeing. The reason for this, it appears, is that pilots learn to use the safety features to replace their own skills, not to supplement them. The classic example of this was the first A320 crash in 1988, at an air show in France. The aircraft behaved exactly as designed; it was just that the pilot didn't. The whole sorry, and tragic, story is laid out in the accident report.

Belts and Braces

The Titanic Effect can occur at many levels, but the most dangerous thing about it is that the more you believe something can't go wrong the worse the eventual disaster is. As Nancy Leveson has explained, the reasoning behind the Titanic's design wasn't stupid, just inadequate:
"Certain assumptions were made in the analysis that did not hold in practice. For example, the ship was built to stay afloat if four or less of the sixteen water-tight compartments (spaces below the waterline) were flooded. Previously, there had never been an incident where more than four compartments of a ship were damaged so this assumption was considered reasonable. Unfortunately, the iceberg ruptured five spaces.

It can be argued that the assumptions were the best possible given the state of knowledge at that time. The mistake was in placing too much faith in the assumptions and the models and not taking measures in case they were incorrect (like the added cost of putting on-board an adequate number of lifeboats)".
This type of self-confident belief in their own creations is the type of thing that leads software designers to remove redundant checks from their systems because “they can never happen”. After all, what’s the point in having belts and braces? Whadda yuh mean, I wasn’t supposed to go on a diet …?

Automated Disaster

However, where risk management and safety are concerned the idea behind the Titanic Effect is utterly perverse, because it implies that beyond a certain point adding further safety measures to a system doesn't improve safety but actually reduces it. Since we can't possibly cover all bases there will always be some residual risk, and the more relaxed users are the greater the threat. The fact that the designer often believes they've conquered any problems merely shows you can take the equations out of the fools but you can't take the fools out of the equation.

If we delegate responsibility for our decisions to an automated system we add the risk of that system going wrong to the list of problems that can afflict us. Overall we may reduce the total risk, but there's no guarantee, and we may obfuscate rather than clarify. In systems in which humans play a safety-critical part, getting them confused is a very dangerous thing.

Now, of course, we all get the point that the Titanic Effect is exactly what has engulfed parts of the securities industry over the past few years – software designers who felt their code was infallible, risk managers who believed their systems were omniscient, executives who were partying as the icebergs loomed and regulators who were dozing on the upper decks even as water was gushing in below them. When disaster struck it was a case of "the passengers always go down with their ship" as the captain and crew rushed for one of the few available lifeboats.

All of which may well be a reasonable approximation to the truth but it doesn’t immediately lead us anywhere useful: how do we start to address these issues, issues which appear to be an off-shoot of a combination of human behavioural tendencies and increasing technical complexity? Not through more automated risk management systems, to be sure: taking strychnine to cure cyanide poisoning is rarely advisable.

Confounding Compliance

The solution recommended by bean counters everywhere is to wrap people in compliance red-tape, to try to make sure that they don’t have any accidents – mainly, it seems, by making sure they don’t do anything. As this report concludes, increased regulation:
"Do[es] not cure the underlying problem of a culture that rewards high level of risk taking with correspondingly high rewards".
It's something we've seen a lot of in Europe recently as governments and regulators attempt to impose rules to prevent behaviours they don't like: rules which mainly seem to be targeted at those organisations they've already taken against, regardless of whether they're actually to blame for anything or not – private equity and hedge funds being prime examples. The fund managers have accepted this in their normal, calm way: by moving to Switzerland.

A better solution is to consider the lessons of the Titanic Effect: people get complacent when they believe that risks are under control without them needing to bother. The designer of the Airbus has remarked that they should have made the aircraft harder to fly:
"In a difficult airplane the crews may stay more alert. Sometimes I wonder if we should have built a kicker into the pilots's seats".
(Bernard Ziegler, Fly By Wire)

Mistaken, By Design

If the pilots knew that the thing could crash they’d be more likely to attend to the job at hand. The real route to safer systems is to make sure that they’re not safe at all without human intervention: which is always true anyway, but oft-times needs positive reinforcement.

Consider a financial risk management system that every so often deliberately makes a mistake which, if not spotted, will deliver a nasty, although far from terminal, loss to the traders and the company involved. Of course, this means that from time to time such a problem will creep through, but equally it means that the chances of people falling asleep at the controls and missing the fact that there's a bloody great iceberg about to cut them in two are vastly reduced.
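To make the idea concrete, here's a minimal sketch (in Python) of what such a deliberately fallible risk monitor might look like. The class, the parameter names and the bounded-distortion mechanics are illustrative assumptions, not a description of any real trading system.

```python
import random


class FaultInjectingRiskMonitor:
    """Toy risk monitor that occasionally distorts the figures it reports,
    so that the humans reviewing them can't simply rubber-stamp the output.

    Purely illustrative: the injection rate, the distortion cap and the
    audit-trail mechanics are assumptions made for this sketch.
    """

    def __init__(self, injection_rate=0.02, max_distortion=0.10, seed=None):
        self.injection_rate = injection_rate  # fraction of reports deliberately distorted
        self.max_distortion = max_distortion  # cap on the relative size of any injected error
        self._rng = random.Random(seed)
        self._pending = {}                    # audit trail: position_id -> (true, reported)

    def report_exposure(self, position_id, true_exposure):
        """Return the exposure figure shown to the trader; occasionally,
        and always with an audit record, it is deliberately wrong."""
        if self._rng.random() < self.injection_rate:
            factor = 1 + self._rng.uniform(-self.max_distortion, self.max_distortion)
            reported = true_exposure * factor
            self._pending[position_id] = (true_exposure, reported)
            return reported
        return true_exposure

    def challenge(self, position_id):
        """A human flags the number as suspicious: the injected fault is
        'caught' and the true figure is returned (None if there was no fault)."""
        true_exposure, _ = self._pending.pop(position_id, (None, None))
        return true_exposure

    def unnoticed_faults(self):
        """Injected errors nobody challenged: the people asleep at the controls."""
        return dict(self._pending)


# Example: a 2% injection rate means roughly one report in fifty needs catching.
monitor = FaultInjectingRiskMonitor(injection_rate=0.02, seed=42)
shown = monitor.report_exposure("GBP-swap-7", 1_000_000.0)
```

The important design choice is that the injected faults are never silent: they're recorded, so anyone who fails to challenge them can be identified afterwards, which is exactly the signal to risk managers described below.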

Why The Lifeboats?

It is, of course, perverse to argue that we can reduce the risks of accidents by increasing their likelihood, but this is exactly what the evidence suggests. Essentially we always pay for risk-taking behaviours, but we can either do it by making small down-payments along the way or through a whopping great one at the end. Such occasional failures would show up those people who weren't taking sufficient care and signal to risk managers who the real risks in their organisations were.

Even better, if the systems occasionally and deliberately throw an error it's much more likely that, when they produce one that isn't intended, someone will pick it up before it reduces the trading systems of the world's biggest stock exchange to a heap of gibbering binary bits and pieces. If the Titanic had had the luck to hit a small iceberg rather than a giant one, it, and its passengers, would probably have lived to sail another day. If you're going to fail, do it quickly and do it controllably. And, above all, ponder this: if the Titanic was unsinkable, why did it have any lifeboats at all?


Related articles: Basel, Faulty?, Risk, Reality and Richard Feynman, Fall of the Machines

6 comments:

  1. how do we start to address these issues

    I think that one of the problems with the Titanic may have been its bigness. The general idea is that the bigger a boat is, the safer it is. It's true to a point. But when a big boat gets things wrong, a lot of people suffer. When a small boat gets things wrong, only a small number of people suffer. So there is much more room for experimentation and flexibility in a small boat.

    Re today's economic crisis, I see the big idea as Buy-and-Hold. People think: "There are so many people who endorse this that it must be right -- it is the safe way to go!" But if a society only permits discussion of One Right Idea re investing, the entire society goes down if that idea is proven faulty.

    I favor free speech -- we should permit the discussion of as many ideas as people can come up with. Some will end up being knuckleheaded. No matter. There will only be a few people who buy into the knuckleheaded ideas so long as there are so many alternatives, so little damage will be done. The best ideas will win more advocates over time, but so long as the discussion of alternatives remains possible, no idea will ever become as dominant as Buy-and-Hold did in the years leading up to our economic crisis.

    It's the strategy of putting all our hopes in the safety of that One Really Big Boat that is sinking us, in my assessment.

    Rob

  2. Joe Lewis loses $1 billion in Bear Stearns collapse - 2008

  3. IIRC, the Titanic's lifeboats were intended to transport people to other ships in case it was disabled. This was the standard at the time.

  4. Ah yes the Peltzman effect.

    http://en.wikipedia.org/wiki/Peltzman_effect

  5. Hi Jim

    Good comment, although the Peltzman Effect and the Titanic Effect are actually different. The Peltzman Effect is essentially Moral Hazard writ large, and is mainly about regulation: as regulators introduce new regulations people behave in ways that offset them. The Titanic Effect is about technological advances and how risk taking increases as safety measures increase.

    It has to be said that these are horribly similar. This paper by John Adams is the only one I can find online that describes the differences, if this helps.

  6. Nice paper. Thank you Tim!!
