Here’s a slightly formal way to look at various ways of coordinating an activity. This grew out of my thinking about how to push content from producers to consumers without introducing a hub that coordinates the work. I was thinking about who bears the bandwidth costs.
One obvious way to solve the problem is to have the producer ship the content directly to all N consumers. He pays for N units of outbound bandwidth and each of his consumers pays for 1 unit of inbound bandwidth. The total cost to get the message out is then 2*N. Of course I’m assuming inbound and outbound bandwidth costs are identical. If we assume that point to point message passing is all we’ve got, i.e. no broadcast, then 2*N is the minimal overall cost to get the content distributed.
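The arithmetic above can be sketched in a few lines of Python (a toy model; the value of N is an arbitrary choice for illustration):

```python
# Direct fan-out: the producer sends one full copy to each of N consumers
# over point-to-point links.
N = 10  # number of consumers (arbitrary)

producer_out = N            # producer pays N units of outbound bandwidth
consumer_in = 1             # each consumer pays 1 unit of inbound bandwidth
total = producer_out + N * consumer_in

print(total)  # 2*N = 20, the point-to-point minimum
```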
Two issues. First, all else being equal the producer would like to shift costs onto the consumers. Second – the hard problem here is not moving the bytes around; the hard problem is coordinating their movement. In our first model most of the coordination cost is borne by the producer. That has the benefit that coordination expertise will accumulate, so the cost of coordination can fall and the quality can rise. The producer retains substantial control over the relationships.
It’s not hard to imagine solutions where the consumers do more of the coordination, the cost is split more equitably, the producer’s costs plummet, and the whole system is substantially more fragile. For example we can just line the consumers up in a row and have them bucket brigade the content. We still have N links, and we still have a total cost of 2*N, but most of the consumers are now paying for 2 units of bandwidth; one to consume the content and one to pass it on. In this scheme the producer lucks out and has to pay for only one unit of bandwidth, as does the last consumer in the chain. This scheme is obviously very fragile. A design like this minimizes the chance of coordination expertise condensing, so it will likely remain of poor quality and high cost. Control over the relationships is very diffuse.
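The bucket-brigade cost split can be sketched the same way (again a toy model with an arbitrary N):

```python
# Bucket brigade: producer -> c1 -> c2 -> ... -> cN.
# Each middle consumer receives one copy and forwards one copy.
N = 10
costs = {"producer": 1}                 # one upload, to the first consumer
for i in range(1, N + 1):
    receive = 1
    forward = 1 if i < N else 0         # the last consumer forwards nothing
    costs[f"consumer_{i}"] = receive + forward

total = sum(costs.values())
print(total)                            # still 2*N
print(costs["producer"], costs[f"consumer_{N}"])  # both pay only 1 unit
```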
We can solve the distribution problem by adding a middleman. The producer hands his content to the middleman (adding one more link) and the middleman hands the content off to the consumers. This market architecture has N+1 links, for a total cost of 2*(N+1). Since the middleman can serve multiple producers the chance for coordination expertise to condense is generally higher in this scenario. Everybody, except the middleman, sees their costs drop to 1. Assuming the producer doesn’t mind being intermediated he has incentive to shift to this model. His bandwidth costs drop from N to 1, and he doesn’t have to become an expert on coordinating distribution. The middleman becomes a powerful force in the market. That’s a risk for the producers and the consumers.
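The middleman variant in the same toy notation:

```python
# Middleman: producer -> middleman -> each of N consumers.
N = 10
producer = 1            # one upload to the middleman
middleman = 1 + N       # receives one copy, sends out N copies
consumers = N * 1       # each consumer still pays 1 unit inbound

total = producer + middleman + consumers
print(total)  # 2*(N+1) = 22, one extra link's worth above the minimum
```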
It is possible to solve problems like these without a middleman; instead we introduce exchange standards, replacing the middleman with a standard. Aside: note that the second illustration, Consumers coordinate, is effectively showing a standards based solution as well. We might use a peer to peer distribution scheme, like BitTorrent for example. To use BitTorrent’s terminology the substitute for the middleman is called “the swarm” and the coordination is done by an entity known as “the tracker.” I didn’t show the tracker in my illustration. When BitTorrent works perfectly the producer hands one Nth of his content off to each of the N consumers. They then trade content amongst themselves. The cost is approximately 2 units of bandwidth for each of them. The tracker’s job is only to introduce them to each other. The coordination expertise is condensed into the standard. The system is robust if the average consumer contributes slightly over 2 units of bandwidth to the enterprise; it falls apart if that average falls below 2. A few consumers willing to contribute substantially more than 2 units can be a huge help in avoiding market failure. The producer can fill that role.
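The idealized swarm arithmetic can be checked the same way (a sketch of the perfect case only, ignoring the tracker’s negligible coordination traffic):

```python
# Idealized swarm: the producer uploads 1/N of the content to each of N
# consumers (1 unit in total); each consumer then downloads the remaining
# (N-1)/N from peers and uploads roughly the same amount back.
from fractions import Fraction  # exact arithmetic, avoids float rounding

N = 10
producer_up = N * Fraction(1, N)        # = 1 unit in total
per_consumer_down = 1                   # everyone ends up with the whole file
per_consumer_up = Fraction(N - 1, N)    # the pieces they pass along

total = producer_up + N * (per_consumer_down + per_consumer_up)
print(total)  # 2*N in the ideal case: the minimum, shared almost evenly
```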
Of course swarming is not the only way we can arrange a standards based solution to this problem. It’s notable because it is both reliable and the total bandwidth cost can be 2*N, the minimum. I find it interesting that as the cost approaches that minimum the swarm becomes unreliable. The second model, where consumers coordinate the distribution in a bucket brigade, can be made more reliable by introducing additional redundant links; these are another way to buy reliability in exchange for increasing the cost above the 2*N minimum.
I find it fascinating to see how the coordination costs, market power, and reliability of the market’s clearing are shifted around in these various scenarios. The bandwidth costs act as a partial proxy for those. Market participants are most concerned about risk. They want to place their faith in a market structure. Once they rendezvous around a given structure they can have a meaningful discussion about the risks that structure creates. The first model has the risk of a powerful producer. The second and last models have the risk of policing standards compliance. The middleman has well known agency risks.
Standards based solutions always have problems with policing and freeloading. I think it’s neat to notice that if the producers and consumers are exchanging data over many time periods they can establish a trading framework with reputation, currency, and market clearing schemes that assure that everybody contributes their 2 units of bandwidth. In effect you can make such systems self policing in much the same manner used in a competitive market. Which goes to reinforce the way that exchange standards create and shape markets.