Too complex to exist

By Duncan Watts
June 14, 2009

ON AUG. 10, 1996, a single power line in western Oregon brushed a tree and shorted out, triggering a massive cascade of power outages that spread across the western United States. Frantic engineers watched helplessly as the crisis unfolded, leaving nearly 10 million people without electricity. Even after power was restored, they were unable to explain adequately why it had happened, or how they could prevent a similar cascade from happening again - which it did, in the Northeast on Aug. 14, 2003.

Over the past year we have experienced something similar in the financial system: a dramatic and unpredictable cascade of events that has produced the economic equivalent of a global blackout. As governments struggle to fix the crisis, experts have weighed in on the causes of the meltdown, from excess leverage, to lax oversight, to the way executives are paid.

Although these explanations can help account for how individual banks, insurers, and so on got themselves into trouble, they gloss over a larger question: how these institutions collectively managed to put trillions of dollars at risk without being detected. Ultimately, therefore, they fail to address the all-important issue of what can be done to avoid a repeat disaster.

Answering these questions properly requires us to grapple with what is called "systemic risk." Much like the power grid, the financial system is a web of complex, interlocking contingencies. And in such a system, the biggest risk of all - that the system as a whole might fail - is not related in any simple way to the risk profiles of its individual parts. Like a downed tree, the failure of one part of the system can trigger an unpredictable cascade that can propagate throughout the entire system.

It may be true, in fact, that complex networks such as financial systems face an inescapable trade-off - between size and efficiency on one hand, and global stability on the other. Once they have been assembled, in other words, globally interconnected and integrated financial networks just may be too complex to prevent crises like the current one from recurring.

Rather than waiting until the next cascade is imminent, and then following the usual modus operandi of propping up the handful of firms that seem to pose the greatest threat, it may be time for a new approach: preventing the system from becoming overly complex in the first place.

To understand why such preventive measures might be useful, it helps to take a step back and notice a general trend toward building ever larger and more complex networks. In recent years, hundreds of millions of people have rushed to join online social networks, while billions more rely on e-mail and cellphones to stay connected to friends and coworkers all day, every day. Technologists wax lyrical about "Metcalfe's Law," which posits that a network's "value" increases in proportion to the square of the number of people or devices in it. And system designers revel in the ability of networks to improve a system's overall efficiency by dynamically distributing computer-processing load, power generation, or financial risk, as the case may be.
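
A back-of-the-envelope sketch makes that arithmetic concrete. The short Python snippet below follows the usual reading of Metcalfe's Law - counting the possible pairwise connections among n members - with membership numbers chosen purely for illustration; it shows why a tenfold increase in users translates into roughly a hundredfold increase in "value":

# Counting possible pairwise connections among n members: the count
# grows as n*(n-1)/2, i.e., roughly the square of n.
def potential_connections(n):
    """Number of distinct pairs among n members."""
    return n * (n - 1) // 2

for n in (10, 100, 1000, 10000):
    # A tenfold jump in members yields roughly a hundredfold jump in pairs.
    print(f"{n:>6} members -> {potential_connections(n):>13,} possible pairs")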

In all the excitement, however, we tend to overlook a fact that should be obvious - that once everything is connected, problems can spread as easily as solutions, sometimes more so. Thanks to globally connected transportation systems, epidemics of disease like SARS, avian influenza, and swine flu can spread farther and faster than ever before. Thanks to the Internet, e-mail viruses, nasty rumors, and embarrassing truths can spread to colleagues, loved ones, or even around the world before remedial action can be taken to stop them. And thanks to globally connected financial markets, a drop in real-estate prices in California can hurt the retirement benefits of civil servants in the UK.

Traditionally, banks and other financial institutions have succeeded by managing risk, not avoiding it. But as the world has become increasingly connected, their task has become exponentially more difficult. To see why, it's helpful to think about power grids again: engineers can reliably assess the risk that any single power line or generator will fail under some given set of conditions; but once a cascade starts, it's difficult to know what those conditions will be - because they can change suddenly and dramatically depending on what else happens in the system. Correspondingly, in financial systems, risk managers are able to assess their own institutions' exposure, but only on the assumption that the rest of the world obeys certain conditions. In a crisis it is precisely these conditions that change in unpredictable ways.
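
To see how a cascade invalidates the very conditions a risk assessment rests on, consider a toy load-redistribution model - a deliberately simplified Python sketch with made-up loads and capacities, not a model of any actual grid or bank. Every node looks safe when judged in isolation, yet once the first node fails, its load shifts onto its neighbors, and the conditions they were assessed under change mid-cascade:

# Toy cascade model: each node carries a load and has a fixed capacity.
# When a node fails, its load is split among its surviving neighbors,
# which can push them over capacity and propagate the failure.
from collections import deque

def simulate_cascade(loads, capacities, neighbors, first_failure):
    failed = {first_failure}
    queue = deque([first_failure])
    while queue:
        node = queue.popleft()
        survivors = [m for m in neighbors[node] if m not in failed]
        if not survivors:
            continue  # nowhere left to shed load
        share = loads[node] / len(survivors)  # redistribute the failed load
        for m in survivors:
            loads[m] += share
            if loads[m] > capacities[m]:      # overload -> a new failure
                failed.add(m)
                queue.append(m)
    return failed

# Four nodes in a ring, each comfortably below its capacity of 10
# when assessed in isolation (loads of 8, 7, 6, and 7).
loads      = {0: 8.0, 1: 7.0, 2: 6.0, 3: 7.0}
capacities = {0: 10.0, 1: 10.0, 2: 10.0, 3: 10.0}
neighbors  = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(simulate_cascade(loads, capacities, neighbors, first_failure=0))
# -> {0, 1, 2, 3}: one failure drags down the entire system.

No individual node here was risky by the usual standards; what failed was the assumption that the load on each node would stay where the risk assessment found it.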

No one, for example, anticipated that an investment bank as old and prestigious as Lehman Brothers could collapse as suddenly as it did, so nobody had that contingency built into their risk models. And once it did fail, just as the failure of a single power line increases the stress on other parts of the system, leading to further "knock-on" failures, so too did Lehman's unlikely collapse render other previously unlikely failures suddenly much more likely.

Risk managers have started to pay more attention to systemic risk of late, but unfortunately they haven't made nearly enough progress. A 2006 report co-sponsored by the Federal Reserve Bank of New York and the National Academy of Sciences concluded that even defining systemic risk was beyond the scope of any existing economic theory. Actually managing such a thing would be harder still, if only because the number of contingencies that a systemic risk model must anticipate grows exponentially with the connectivity of the system.
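
The arithmetic behind that exponential growth is easy to check. If each of n interlinked institutions can be either sound or distressed, a model that tries to enumerate every joint outcome faces 2^n scenarios; the values of n in this sketch are illustrative:

# If each of n interlinked institutions can independently be sound or
# distressed, enumerating joint outcomes means 2**n scenarios.
for n in (10, 30, 50, 100):
    print(f"{n:>3} institutions -> {float(2**n):.2e} joint scenarios")

By 50 institutions the count already exceeds a quadrillion - far more contingencies than any model can enumerate.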

So if the complexity of our financial systems exceeds that of even the most sophisticated risk models, how can government regulators hope to manage the problem?

There is no simple solution, but one approach is close to what the government already does when it decides that some institutions are "too big to fail," and therefore must be saved - a strategy that, as we have seen recently, can cost hundreds of billions of taxpayer dollars. As the Treasury Department's miscalculation over Lehman Brothers demonstrated, these judgments are difficult to make and prone to error. But the main problem - and the reason for their immense cost - is that they are made only after a crisis is already under way, at which point only drastic action is available.

An alternative approach is to deal with the problem before crises emerge. On a routine basis, regulators could review the largest and most connected firms in each industry, and ask themselves essentially the same question that crisis situations already force them to answer: "Would the sudden failure of this company generate intolerable knock-on effects for the wider economy?" If the answer is "yes," the firm could be required to downsize, or shed business lines in an orderly manner until regulators are satisfied that it no longer poses a serious systemic risk. Correspondingly, proposed mergers and acquisitions could be reviewed for their potential to create an entity that could not then be permitted to fail.

Government regulators telling firms they can't grow or innovate sounds like dangerous meddling in free markets. But there are at least three reasons to think such intervention is justified.

First, there is already a precedent for precisely this kind of government intervention - namely anti-trust law, which effectively guards against firms growing so large that they stifle competition. Perhaps what we need is an "anti-systemic risk" law that would aim to avert systemic risk before it is too late. No doubt firms that are denied the right to grow under this law would complain; but firms complain about anti-trust rules as well, and somehow our free market system has survived, in part because the beneficiaries of anti-trust rulings are often smaller, more innovative firms.

Second, the current crisis has demonstrated that markets do not automatically regulate systemic risk any more than they automatically guarantee competition. Pragmatically speaking, therefore, government intervention is required to prevent markets from destroying themselves, and the relevant question is what kind of intervention is effective: preventive management, or after-the-fact rescue.

Third, and most fundamentally, there is something badly wrong with the kind of free market that ends up at the mercy of a single firm. Failing to deal with systemic risk, in other words, creates a world that is not only uncertain, but also unjust, in that individual firms can generate immense profits by taking risks that everyone else ends up bearing as well. Taking free-market principles seriously, therefore, requires us to acknowledge that firms that are "too big to fail" are really too big to be permitted to exist.

Duncan Watts is a principal research scientist at Yahoo! Research and the author of "Six Degrees: The Science of a Connected Age." This article was adapted from the Harvard Business Review.