
Wednesday, 12 May 2010

Fall of the Machines

IT Illiterates Beware

A while ago in a piece named “Rise of the Machines” I suggested that automated algorithmic trading systems were a problem waiting to happen. Sadly this was wrong because the problems had already happened. As this article from the Financial Times shows, these systems had been misbehaving even before Wall Street freaked on May 6th 2010.

What’s really worrying, though, is that the FT article poses the question: “…has technology reached the point where machines pose systemic risks if they go berserk?” The answer’s pretty obvious to anyone who’s ever crafted a program more complex than the obligatory “Hlelo wrold” initiation, but perhaps the leaders of the financial world are really IT illiterates? Just in case, here’s a primer.

Crazy Code?

Remarkably, the causes of the calamitous drop on Wall Street on 6th May still aren't entirely understood. It's a tad worrying that no one quite knows why the Dow collapsed 1000 points, a gut-wrenching 9.2%, in less than an hour before recovering almost as rapidly. Procter & Gamble stock fell 37%, Apple 22%, Accenture 99.9% ... the list goes on.

One theory is that it's something to do with the automated trading systems that have increasingly come to dominate markets. The suggestion is that the NYSE's circuit breakers inadvertently caused the problem. Basically when markets start to move suddenly and erratically the NYSE has the power to call a time-out, to allow buyers and sellers to re-establish a common position. However, this had an unexpected side-effect, because the trade-bots live in their own world:
"The rest of the markets are free to trade around us," said NYSE CEO Duncan Neiderauer, "and that's what they did." So, as the NYSE paused for a minute or two at about 2:40 p.m. ET, the off-exchange computers kept searching to execute trades. They hit the best bids still standing, which in many cases were far below the prior price. And in some cases, the off-exchange computers found no bids at all. When that happens, market-making computers see a zero bid, then offer a penny higher to capture the trade and collect a commission -- hence the trades of just one cent for several stocks ...
Put a human in an unexpected position and they may use their judgement to deal with it. Surprise a bot and, much like a politician faced with irrefutable evidence they've made a mistake, it'll just carry on regardless.
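
As a purely hypothetical sketch of the behaviour described above (this is not real exchange or market-maker code, and every name here is invented), the problem is that a naive matching routine hits whatever bid is left standing, even when the only bid remaining is a one-cent "stub quote":

```python
# Hypothetical illustration only: a naive off-exchange matcher fills a market
# sell against whatever bid remains, however far it sits from the last trade.

def best_bid(order_book):
    """Return the highest bid still standing, or None if the book is empty."""
    return max(order_book, default=None)

def execute_market_sell(order_book, last_trade):
    bid = best_bid(order_book)
    if bid is None:
        # No real bids left: a market maker's one-cent stub quote becomes the market.
        bid = 0.01
    discount = 1 - bid / last_trade
    # A human might baulk at selling this far below the last trade; the bot doesn't.
    return bid, discount

# With the primary exchange paused, the remaining book is thin or empty:
print(execute_market_sell([39.50, 22.10], last_trade=60.00))  # fills ~34% below the last trade
print(execute_market_sell([], last_trade=60.00))              # fills at one cent, ~99.98% below
```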

Hole Punchers, Code Monkeys

Software development is probably the most complex thing that a human being does on an industrial scale. Although we’ve moved on from the days when programmers could read punched-tape programs by eye and bug-fix using a hole punch, the range of sophisticated tools now available to help them has only served to increase program complexity and, hence, the number of ways programs can go wrong.

By and large, though, there are two sets of issues that can arise. Either the programs contain bugs, accidental errors introduced by the programmers, or they're not built to cope with changes in their environment: in essence they inaccurately model the real world, so that when it misbehaves so do they. This seems to have been the problem during the Dow's drop: the automated trading bots had no concept of a trading time-out and there was no way of stopping them.
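
To make the second kind of failure concrete, here is a hypothetical toy example (the state names are invented for illustration, not taken from any real trading system) of a bot whose model of the world only knows "open" and "closed". A circuit-breaker time-out is neither, so the bot simply carries on:

```python
# Hypothetical illustration: a bot whose world model has no notion of a
# trading time-out. Anything that isn't "closed" is treated as business as usual.

KNOWN_STATES = {"open", "closed"}   # no "halted" state in the model

def should_trade(reported_state):
    # A state the model doesn't recognise -- e.g. "halted" -- falls through
    # to the default branch.
    if reported_state == "closed":
        return False
    return True

print(should_trade("open"))    # True
print(should_trade("closed"))  # False
print(should_trade("halted"))  # True -- the bot carries on regardless
```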

Buggy Software

In fact software's hard enough to get right anyway. As Watts S. Humphrey points out, exhaustive testing of programs is virtually impossible:
"To judge the practicality of doing this, I examined a small program of 59 LOC. [Lines of Code] I found that an exhaustive test would involve a total of 67 test cases, 368 path tests, and a total of 65,536 data values. And this was just for a 59 LOC program. This is obviously impractical."
Put in context, most complex computer programs stretch into hundreds of thousands of lines of code. If it's impossible to fully test one that's fifty-nine lines long you can begin to understand why it's dangerous to rely 100% on software for anything.
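
A back-of-the-envelope illustration of why exhaustive testing blows up (this is a toy example, not Humphrey's original program): even a trivial routine with a handful of independent conditions multiplies into tens of thousands of input combinations.

```python
# Toy illustration of combinatorial blow-up in exhaustive testing.
# A routine that checks 16 independent yes/no conditions already has
# 2**16 = 65,536 possible input combinations -- the same order of magnitude
# Humphrey quotes for data values in a 59-line program.

n_boolean_inputs = 16
print(2 ** n_boolean_inputs)   # 65536 input combinations

# Execution paths multiply too: each independent if/else branch roughly
# doubles the number of paths through the code.
for branches in (5, 10, 20):
    print(branches, "branches ->", 2 ** branches, "paths")
```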

To Design a Swing Ask a Child

Problems can occur at any of the various stages of software development, as this famous cartoon about the process illustrates. Sometimes it’s a wonder that anything works, to be honest. This (in)famous Wired article provides a grimace-making list of software failures.

Software is used, however, because it’s flexible and allows the creation of solutions that otherwise wouldn’t be possible. Most of the time it’s not unreasonable to expect a few bugs and the odd problem and decent software companies have processes in place to manage this. In a few situations, though, this isn’t acceptable. So-called mission critical systems can’t – mustn’t – go wrong. Think, for example, of the avionics computers that control fly-by-wire aircraft or the control systems in nuclear power stations. These are not places you want random bugs unexpectedly occurring unless you're especially enamoured of having your limbs rearranged randomly.

Because of this, these types of systems require complex processes and designs to avoid possible failures. For instance, aircraft computer systems will often have multiple implementations of the software, which negotiate amongst themselves to overcome discrepancies. Testing and approval processes are, understandably, rigorous and slow – one of the reasons it takes a long time to approve a new aircraft. In many other industries time to market is more important than software bugs, so these creep in and are often expected in live systems. Sometimes this matters and sometimes it doesn’t.
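
Here is a simplified sketch of the "multiple implementations negotiating amongst themselves" idea (real avionics voting logic is far more involved, and these function names are invented for illustration): three independently written versions of the same calculation, with a voter that only accepts a result backed by a majority.

```python
# Simplified sketch of N-version redundancy with majority voting.
from collections import Counter

def version_a(x): return x * 2
def version_b(x): return x + x
def version_c(x): return x * 2 if x < 1000 else x   # imagine a bug above 1000

def vote(x, implementations=(version_a, version_b, version_c)):
    results = [impl(x) for impl in implementations]
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        # No agreement at all: fail safe rather than guess.
        raise RuntimeError("No majority among implementations")
    return value

print(vote(7))     # 14: all three versions agree
print(vote(5000))  # 10000: version_c disagrees but is outvoted
```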

Trading Bugs

However, what about a situation where a badly written and designed piece of software could bring down a global system that millions of people depend upon? What if a weakly tested algorithmic trading program managed to crash the complex computers upon which the global economy relies? As the FT article quoted at the start of this piece relates, such problems do occur:
"The case was made public only last month when the disciplinary board of the NYSE fined Credit Suisse for failing adequately to supervise an "algorithm" developed and run by its proprietary trading arm - the desk that trades using the bank's own money rather than clients' funds.

Algorithms have become a common feature of trading, not only in shares but in derivatives such as options and futures. Essentially software programs, they decide when, how and where to trade certain financial instruments without the need for any human intervention. But in the Credit Suisse case the NYSE found that the incoming messages referred to orders that, although previously generated by the algorithm, were never actually sent "due to an unforeseen programming issue"."
As we’ve seen “unforeseen programming issues” – aka bugs – are part and parcel of software development. How many and how serious these are is dependent on the time and rigour of the quality control processes. It’s not realistic to design systems with no bugs, but it is possible to create ones that fail safe. These are commercial issues, not technical ones.
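
What "failing safe" might mean in practice can be sketched in a few lines (a hypothetical illustration only; the limits and field names here are invented, not drawn from any real trading system): rather than trying to be bug-free, wrap the order-generating algorithm in sanity checks and refuse to trade when anything looks wrong.

```python
# Hypothetical sketch of a fail-safe wrapper around an order-generating algorithm.
MAX_ORDER_SIZE = 10_000
MAX_PRICE_DEVIATION = 0.10   # reject prices more than 10% from the last trade

def safe_submit(order, last_price, market_halted):
    if market_halted:
        return "REJECTED: market halted"
    if order["size"] <= 0 or order["size"] > MAX_ORDER_SIZE:
        return "REJECTED: implausible order size"
    if abs(order["price"] - last_price) / last_price > MAX_PRICE_DEVIATION:
        return "REJECTED: price too far from last trade"
    return "SENT"

print(safe_submit({"size": 500, "price": 59.80}, last_price=60.00, market_halted=False))  # SENT
print(safe_submit({"size": 500, "price": 0.01},  last_price=60.00, market_halted=False))  # rejected
```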

Algorithmic, Systemic Risk

As regards financial software like algorithmic trading systems, it’s a moot point whether these should be considered mission critical or not, but it’s clearly the case that they could pose a systemic risk to the markets they trade in if they’re badly implemented. Recent issues simply show the kind of impact on market confidence that bots going off on a bender can cause: investors hate uncertainty, and having these unexplained problems happen while everyone's nervous anyway does nothing to calm the markets.

So let’s be clear: if one of these systems really goes haywire and causes some kind of domino effect across global markets, there’s no point in the people responsible for overseeing and regulating them simply shrugging their shoulders and saying it was inevitable. It’s not. It’s just that making such systems safe is difficult, expensive and an obstacle to innovation in trading systems: an investment no individual market participant can afford to make on their own in a competitive environment.

In the end, though, exactly how much innovation in automated trading systems do we really need?


Related Articles: Investment Forecasts: Known Unknowns, Quibbles with Quants, Rise of the Machines

2 comments:

  1. Automated trading systems are still as immature as those featured in the Wired article. Things will improve over the next decade (scary thought 1). There will always be bugs though, and sometimes those bugs will conspire to overcome the checks and balances (scary thought 2).

    "However, what about a situation where a badly written and designed piece of software could bring down a global system that millions of people depend upon?"

    Did it make a profit for its owners?

    What's my incentive for testing my code beyond "does it return a profit?"

  2. As a Baltimore financial planner, I can tell you that the wild ride on Wall Street that day created a lot of doubt in my mind about the growing role of computer systems in trading. Is every piece of software susceptible to bugs? Sure. But, in my opinion, the greatest concern is the motivations of the people creating such software. So, maintaining the integrity and security of good software should be the focus.
