Scale-free networks are said to be robust, but they are robust only in a peculiar statistical sense. Consider, as an example, the road network. The roadways in most regions form a scale-free network. They are robust in the sense that if you select a random road in the network and remove it from service, the effect on traffic is very slight. Obviously this is something of a trick: the randomly selected road is typically going to be some minor dirt road way out on the tail of the distribution. The effect of its removal will likely go unnoticed until somebody decides to drive up there next fall to collect a load of firewood.
Random attacks are only one kind of risk systems need to guard against. Pick the right intersection and you can do plenty of harm to a region's road network. The random failure model assumes that the source of trouble is uncorrelated with the topology of the network. Scale-free networks seem almost brittle if the failures are aligned with the hubs in the network.
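That asymmetry is easy to see in a toy simulation. The sketch below (illustrative only; the graph model, sizes, and names are my assumptions, not anything from the post) grows a Barabási–Albert preferential-attachment graph as a stand-in for a scale-free road network, then compares the largest surviving connected component after removing 5% of nodes at random versus removing the 5% highest-degree hubs.

```python
# Toy comparison: random failure vs. hub-targeted attack on a
# scale-free (preferential-attachment) graph. Illustrative sketch.
import random
from collections import defaultdict

def barabasi_albert(n, m, rng):
    """Grow a scale-free graph: each new node attaches to m existing
    nodes chosen with probability proportional to their degree."""
    adj = defaultdict(set)
    targets = list(range(m))      # seed nodes
    repeated = []                 # node list weighted by degree
    for new in range(m, n):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        targets = rng.sample(repeated, m)
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component after removing nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

rng = random.Random(42)
g = barabasi_albert(1000, 2, rng)
k = 50  # remove 5% of nodes

random_hit = largest_component(g, rng.sample(sorted(g), k))
hubs = sorted(g, key=lambda v: len(g[v]), reverse=True)[:k]
targeted_hit = largest_component(g, hubs)

print("random removal, largest component:", random_hit)
print("hub-targeted removal, largest component:", targeted_hit)
```

Run it and the random removal barely dents the giant component, while taking out the same number of hubs fragments the network far more severely — the "robust yet fragile" pattern described above.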
There are key intersections in the local road network that fail often, because traffic is perfectly aligned with the network's hubs and a simple fender bender can take an intersection out of service. It would be much worse than it is if the system didn't adapt: the failing hubs attract the attention of highway engineers, who adjust the traffic flows to reduce the incidence of traffic-related failures.
Various sources of failure rain down on systems like these. If the rain is totally random, then the scale-free network is robust because the hubs are hidden in a mist of far, far less critical nodes. If the threats are aligned with the topology of the network, like traffic on the roads, then the system can respond in one of three ways. It can add resources to the hub. It can change the topology of the network. Or it can go on the offense and attempt to stop the source of the failures.
For example, shift this over into a biological frame. Radiation rains down on an animal continuously, triggering numerous tiny failures, but the scale-free system design keeps the chance of total system failure quite low. The body's various organs serve as hubs, and each one of them has evolved to accumulate redundancy and numerous special-case adaptations to deal with the problems that arise from the ebb and flow of the work they perform. And then there is always good hygiene.
But notice that changing the topology of the network isn't on that list. I suspect that's because I don't really know anything about biology. Maybe if you want to see topology change you have to shift from a single species to ecologies?
It's hard for individuals to shift topology. The city may be at risk because it has only one bridge over the river, and the traffic engineer can say "you really need to build in some redundancy," but it is hard to get a redundant topology to emerge if the system has already condensed into a topology without one. It's particularly difficult if you're going to have to shift resources to make it happen: displacing people from their property, or taxing the efficiency of the existing system to fund the shift. For this reason systems tend to fail rather than adapt. It's notable that the scale-free topology is actually resistant to change.
It seems relevant to point out at this point that there is a large section of the US highway system that isn't scale-free: the interstate highway system, built by military minds. That seems relevant because if you fear an attacker who is extremely smart, you may prefer to forgo some of the efficiency advantages of a perfectly scale-free network and instead design a system that is topologically more random. Maintaining that will be hard, though, because over time the system will evolve toward the scale-free pattern as it seeks the efficiency of hubs. In fact, over the decades the interstate highway system has become less a grid and more a scale-free network.
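The grid's advantage can be checked with another toy sketch (again, the lattice model and sizes are my illustrative assumptions): in a 4-neighbor grid almost every node has degree 4, so a "hit the hubs" attack has no hubs to find and does only about as much damage as a random one.

```python
# Toy check: a grid has no hubs, so a degree-targeted attack
# degrades it gracefully. Illustrative sketch only.
import random

def grid_graph(side):
    """4-neighbor lattice; interior nodes all have degree 4, so there
    are no hubs for a targeted attack to exploit."""
    adj = {}
    for r in range(side):
        for c in range(side):
            nbs = set()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < side and 0 <= cc < side:
                    nbs.add((rr, cc))
            adj[(r, c)] = nbs
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component after removing nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best

rng = random.Random(7)
g = grid_graph(32)                    # 1024 nodes
nodes = list(g)
rng.shuffle(nodes)                    # random tie-breaking among equal degrees
by_degree = sorted(nodes, key=lambda v: len(g[v]), reverse=True)
survivor = largest_component(g, by_degree[:51])   # remove ~5% of nodes
print("grid, largest component after 'hub' attack:", survivor)
```

Even after the attacker removes the 5% best-connected nodes, the giant component survives almost intact — the efficiency-for-resilience trade the military designers made.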
The assumption that adaptable systems can always respond to any problem is just plain false. We don't have two hearts, and we are unlikely to evolve a second one. The assumption holds only when the attack on the system isn't fatal, but merely painful enough to allow the system time to adapt. If the system is attacked at its hubs with sufficient vigor it will collapse.
Since scale-free systems appear in so many venues, these points apply broadly. The military coming out of the Second World War was very sensitive to these issues, and the design of the interstate highway system reflected that: they traded efficiency for a more resilient topology. The design of the internet was similarly intentional, though most of that's been lost. I suspect this is one reason why London's Circle line, which forms a kind of hub, was a target of the terrorists' attacks.
Ben, along these lines it might be worth looking at Doyle’s work on Highly Optimized Tolerance http://www.cds.caltech.edu/~doyle/CmplxNets/
aka even with two hearts, we could still be done in by a thin sheet of plastic (aka this bag is not a toy)
rdf – Ah, you caught me! I’ve been stewing on one of his papers for months now. Thank you much for the pointer to the others! It’s interesting to intersect those ideas with the ones in the book “Normal Accidents.”
Pingback: Ascription is an Anathema to any Enthusiasm › How Complex Systems Fail