Category Archives: power-laws and networks

Governance of Overlapping Groups

There are very, very few works that directly address how to control the slope of the power-law curve. Clay Shirky’s essay on inequality is one example. Michael Porter’s list of things that keep an industry fragmented is another. Both enumerate tools that create niches, barriers, and membranes. Each one of those tools deserves its own book – or at least a page in a wiki.

Rereading Clay’s essay I notice that one of the techniques he mentions is to reduce the amount of heterogeneity in the system; the example he gives is hierarchy – i.e. the military. I’m reminded that hierarchy is one of the common solutions to the coordination problem around collective action.

It’s a cultural truism that centralized hierarchical systems often run amok and become the prime source of abuse of power and severe inequality. The lesson embedded in the power-law story (i.e. appreciating that homogeneous networks spontaneously create power-law distributions) is: “Hold on – homogeneity creates hubs, and those hubs have power, and that power is just as susceptible to abuse as any centralized hierarchical design.”
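Here’s a toy sketch of that lesson (my own illustration, not from Clay’s essay): grow a network with one perfectly homogeneous rule – preferential attachment – and hubs emerge anyway.

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network one node at a time; each newcomer links to an
    existing node chosen with probability proportional to its degree.
    Every node plays by the same rule, yet hubs emerge."""
    rng = random.Random(seed)
    degrees = [1, 1]    # start with two nodes joined by one edge
    endpoints = [0, 1]  # every edge lists both endpoints; sampling this
                        # list uniformly is a degree-weighted choice
    for newcomer in range(2, n_nodes):
        target = rng.choice(endpoints)
        degrees.append(1)            # the newcomer's single link
        degrees[target] += 1
        endpoints.extend([newcomer, target])
    return degrees

degrees = sorted(preferential_attachment(5000), reverse=True)
print("biggest hubs:", degrees[:5])
print("median node:", degrees[len(degrees) // 2])
```

The biggest hub ends up with a degree far above the median node’s, which is the whole point: nobody designed the hierarchy, it fell out of a homogeneous rule.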

Rereading Clay’s essay this week, I’m struck by the way that firms often attempt to temper the problems hierarchy creates by creating multiple overlapping hierarchies; e.g. functional/professional hierarchies that overlap a project/product-line organization.

As a move in the game of systems design it provides a kind of check and balance, a means of auditing and oversight. Additionally it helps in creating safe niches for key activities. Inside those niches various kinds of skills, resources, and public goods can emerge.

I have a model, totally ungrounded in fact, that as the population grew the three spheres of commercial, civic, and religious activity broke apart. When people talk about the separation of church and state they are speaking of that historical event – but just as critical was the way that commerce began to become distinct from the creation of public goods that is the work of the civic sphere.

Governance of such tangles of overlapping groups is a fascinating mess. The coordination problems run deep, as they must if the groups are to remain distinctive.

A friend and I got to wondering yesterday if there are any examples of systems where tribe A elects the elites of tribe B and vice versa. You can see plenty of cases where a dominant tribe appoints the elites of a subordinate tribe. You can see plenty of places where one tribe has numerous moves it can use to check and balance the moves of other tribes.

It would be a very interesting university where tenure choices in each department were always made by other departments, for example.

Category

David Chess writes about categories.

Some categories really do exist. For example, there really is something called an elephant. You can decide if X is an elephant. The elephant category exists because there is some continuity in the genetic thread that runs thru members of the species.

Of course, even the edges of a category like that are rough. In the very very long term the tree of evolution makes them rough. In the very short term the life experience of each individual gives them unique personality.

I think a lot of people presume that a similar continuity runs thru other systems. They project what they know about categories in species onto populations where no force as powerful as the genetic thread exists. When this doesn’t work they draw analogies to the roughness that arises in nature. Other systems that appear to have continuity are forever undergoing Frankenstein-like hybridization. It’s all much more confusing and fuzzy than the world of species.

Lots of systems don’t have any useful sharp boundaries. For example, there just isn’t a useful sharp boundary between a blog, a newspaper, and a firm’s updates to its product catalog.

One of the curious aspects of the power-law discussion is how the elites in a network create the impression of a category. For example, are the top 5000 firms on the planet a good sample for informing your model of what a firm is? Would a sample of the bottom 5000 or the middle 5000 firms be better? Clearly a study of the top 5000 isn’t going to usefully inform your model of the bottom or middle 5000. The word firm – well, it’s a mess.

This problem infects every discussion of a population with a power-law distribution. The population appears to exist as a category; but as you attempt to describe it there is a tendency to sample one sub-population and project what you learn from that onto the whole class. For example, if you study the A-list blogs you learn almost nothing about what is going on in what you might name the micro-blogging community around teenagers. You study open source and discover that most projects at SourceForge are tiny – do you declare them to be irrelevant?
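A toy sketch of that trap (synthetic numbers; I’m assuming firm “size” follows a Pareto distribution as a stand-in for the power law):

```python
import random

random.seed(0)
# 100,000 synthetic "firms" whose sizes follow a Pareto distribution,
# sorted from largest to smallest.
population = sorted((random.paretovariate(1.2) for _ in range(100_000)),
                    reverse=True)

samples = {
    "top 5000": population[:5000],
    "middle 5000": population[47_500:52_500],
    "bottom 5000": population[-5000:],
}
for name, sample in samples.items():
    print(f"{name:>12}: mean size {sum(sample) / len(sample):10.2f}")
```

Each sample is a perfectly good sample of *something*; none of them tells you much about the other two. The word “firm” covers all three.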

It’s an interesting puzzle: words, it would appear, just break down when you attempt to think about these populations.

Situated Software #2

A conversation with Brian reveals an interesting insight: many products emerge first as situated software, to use Clay’s delightful term.

Consider for example MovableType. It emerged situated in the intersection of the set of people who could install Perl and the set of people who manifested a desire to rant (or reveal). For the longest time that community of people both limited its growth and at the same time forgave it any number of shortcomings. Finally the folks who wrote it budded off TypePad, which eliminated the Perl-expertise portion of the embedded situation.

If you want to get dragged into the conversation about membrane design again then you can turn up the knob and rant about how that’s another example of how public goods can be captured by private entities. But that’s somebody else’s rant.

All this is analogous to Clayton Christensen’s rants about how disruptive products seem to always emerge serving unserved customers. Such customers are both forgiving of the product’s shortcomings and provide a strong demand signal that helps to shape the product that emerges. Both of these are necessary preconditions for creating something new.

Oh no, there are now two dudes named Clay in this posting; what’s up with that!

In any case.

Another point to make about situated software is this balance between a forgiving environment and a strong signal that helps the software to adapt.

“adapt” – I stole that from Stefano.

The challenge in making a thing survive over time is getting it to adapt.

So among the benefits that a piece of situated software draws from its environment is this adaptive advantage. More forgiveness. Better feedback.

One reason that real-world software is so much harder is that it’s unforgiving. One reason that Windows has thrived is that Microsoft has demanded a high level of forgiveness from its users. They get to do that because of the monopoly.

Economists should stop being so fixated on the pricing advantages a monopoly captures and turn their attention to the adaptive advantages. Oh wait, I’m getting dragged back into the membrane design discussion; I hate that discussion.

Situated software

Clay’s new essay about what he calls situated software is very important. He is saying something new about software. I don’t think he’s run the idea to ground yet, which makes it even more fun.

He says the idea arose out of observing how his students executed an assignment: build a web app useful to our local community. The key insight was that some of these applications leveraged the complement of that assignment: build a web app that makes use of the local community.

Clay is close to asking a question that I’ve been puzzling about in various forms for the last few years. If we embrace that group membranes are a good thing then what kinds of things are possible inside of the membrane? It is hard to get to this question since so much energy is devoted to worrying over the membranes.

Clay frames that question by introducing the idea that there is a class of software we could call “situated software”: software that’s inside the membrane, software that is dependent on the social contract or physical reality it runs inside of. He contrasts this with what he names “web school” software, i.e. software designed to run in the cold cruel world – the globalized, firewall-free world. That other world is, of course, interesting because the huge audiences create the chance, at minimum, of drawing on a huge sample space of users and, at maximum, of spinning up huge network effects. But hey! Ignore that. What if I want to create software that empowers huge numbers of groups?

You can reduce what Clay’s saying in this essay, should you want to, by mapping it into various conventional frameworks. Nothing wrong with that, as long as we take care to avoid killing the baby. Clay even does some of that in the tail end of the essay.

To some extent the idea is like personalization, i.e. software that adapts to its users. To some extent it’s like localization, i.e. software that adapts to the culture of its users. To some extent it’s like the things that we too casually repeat about the democratization of software development, i.e. that exciting things happen when users are empowered to author their own solutions.

I have gotten a degree of mileage out of that last one. It gives you a long series of historical events you can pick apart. In each you can look at the social unfolding of what happens when software becomes more local. Lots of examples: minicomputers, PCs, spreadsheets, desktop publishing, drum machines, HyperCard…. Clay’s essay makes me tempted to go back and think about each of them again as a “localization event,” as enabling situated software to emerge. That’s more inward-looking, which is what I want, than my usual model for these (i.e. undermining the ivory towers).

He talks about how one of the student applications, a market-making app, could skip having to build the module we find in market makers like eBay that tries to reify buyer/seller reputations. His students’ application could gloss over that because the local community’s reputation system was already available. It was in the platform, so to speak.

That reminded me of how often I’ve encountered in-house one-of-a-kind systems with no training materials whatsoever. You do occasionally hear complaints about the lack of training materials (or doc); but generally the social networks of the organization are entirely sufficient to make up for the lack of doc. In fact I’m reasonably confident that the lack of doc often turns out to be something of a positive for both the local social network’s health and the applications in question. At minimum it makes adapting the applications easier. Documentation has a tendency to be like a coating of varnish, making an application shiny and immovable.

Similarly he talks about another sample application where the problem of how to manage the dialog with your users is solved by physical presence. In what he calls “web school” or real internet applications that problem bleeds into the entire can of worms around email preference management and privacy policies. Who in their right mind asks their bank to send them more email?

In the example he gives, the students built kiosks and put them in a public space: devices with minimal UI. Both the minimal UI and the use of public space are things I’ve noticed too. The first time I noticed it was with spreadsheets – an early success at software democratization. You would not believe the percentage of spreadsheets that contain no calculation whatsoever; instead they live their lives as ritual artifacts placed on a table in a meeting. (The information radiator is a similar story.)

Thanks Clay!

Choice

A couple notes on choice.

This is a nice review of a book I need to read on the excess of choice in modern life. One story from the review: many young people arrive in their thirties having failed to make any choice about their line of work. The modern world does not constrain their choices. They are taught to value above all else a diverse portfolio of options, i.e. freedom. They haven’t specialized. They have no depth of expertise.

Then on NPR I heard this story. This guy took an oath to surf every single day. He’d taken the oath the last time a February 29th fell on a Sunday and he swore to continue until the next time it happened. Seems like a reasonable workaround for the problem of excessive choice.

This is one of my primary interests, i.e. the question of loyalty. Consistent behavior is one of the ways we provide a model for others that they can react to. Consistent behavior is one of the ways we simplify the near-infinite cognitive load of moving thru day-to-day life. Consistent behavior is one of the cornerstones of durable collective action. Consistent behavior is what gives us culture, structures, standards. Consistency is the means we use to lower negotiation costs. It’s just too bizarre to pretend that the collective society could be renegotiating the entire social contract every few moments.

So I’m more than a little surprised that apparently philosophers have settled into the presumption that to “honor sunk costs” is a fallacy. Seems to me that the philosophers have become a bit too loyal to the catechism of the church of portfolio theory.

Finally, I recall that when I visited Ireland one of the citizens cornered me wanting to know what I thought of the high divorce rate in the US. Much later I learned that the percentage of folks who are married in the US is substantially higher than that found in Ireland. (You may consider all this to be hearsay.) A fact that reminded me then and now of how the lack of a dominant church in the US seems to create a higher level of church membership than that found in countries with a more centralized church.

My take on all this is that clearly if you limit choice substantially (demanding very high loyalty) you get rigid cartoons of real groups. If you create extremely high levels of choice you get very transitory groups that never achieve any depth to their collective activities. It’s one of those not-too-hot, not-too-cold problems.

Bonus link. It appears that German has a word for Slovenly Peter.

Info Axioms – Metcalfe’s Law

Via Sam we find this site that is attempting to collect a set of fundamental information axioms. A very impressive set of people are involved. This enterprise is reminiscent of the wonderful book Information Rules as well as a number of other efforts such as those mentioned here.

I’m finding it fun to treat these axioms as a list of homework assignments, i.e. “Critique axiom #N.”

For example here is their first axiom:

Axiom 1 – Metcalfe’s Law

If there are n people in a network, and the value of the network to each of them is proportional to the number of other users, then the total value of the network (to all users) is proportional to n × (n − 1) = n² − n (Shapiro and Varian, 184).

Member aggregation is more important than the type or amount of resources owned (Hagel and Armstrong, 14).

Recognizing that a network’s value might in fact rise as O(n²) is another way of saying that there is a scale advantage to any system that has network nature; and it tries to give the reader some intuition about the magnitude of that advantage. n² seems like a lot of advantage.

Most of us encounter scale advantages first on the production side of things: a large car manufacturer is more efficient than a small one because it can share certain fixed costs (R&D, administration, accounting, whatever) more widely. Of course that kind of scale advantage isn’t n²; in fact it tends to drop off, so that after a while the benefits that accrue from merging two huge automakers are pretty minimal.

What has come to bother me about Metcalfe’s law is that I suspect there are similar limits to its power; limits that arise from the limited attention and limited flexibility of the parties around the edge of the network. Each time you double the population on the edge of the network it will take me some time to find out that some of the new members should replace my current correspondents – since they are more valuable correspondents than my old ones. The new participants are, in effect, a sample of the universe of all possible correspondents. Additionally, is there an argument along these lines: at some point the sample becomes large enough that new samples do little to increase the value of my total set of correspondents?

My second concern about Metcalfe’s law is revealed by Reed’s law – i.e. that the issue isn’t how many correspondent pairs are enabled by the network but rather the number of groups that the network enables to form. That number is truly mind-bogglingly large. So large that if these groups are actually able to form it suggests that the network is assured of triggering a total denial-of-service attack on our attention. A syndrome I’m sure some of us have noticed from time to time.

I find Reed’s law a much more convincing model than Metcalfe’s. I find it a much more compelling model because I’m convinced that groups (i.e. graphs) are a more useful unit of conceptualization than links. Of course it too must have scale-limiting syndromes that need to be understood.
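If you want to see the gap between the two models, the arithmetic is quick (a sketch; both formulas are the standard textbook forms):

```python
def metcalfe(n):
    """Metcalfe: value grows with the number of ordered pairs."""
    return n * (n - 1)

def reed(n):
    """Reed: value grows with the number of possible subgroups of
    two or more members, i.e. 2^n minus the empty set and the n
    singletons."""
    return 2 ** n - n - 1

for n in (10, 20, 40):
    print(f"n={n:>2}: pairs ~ {metcalfe(n):>5,}   groups ~ {reed(n):>16,}")
```

By n = 40 the group count is already in the trillions, which is exactly the denial-of-service-on-attention problem: no population of forty people can actually convene anything like 2^40 groups.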

My third problem with both Metcalfe’s law and Reed’s law is the manner in which they gloss over the issue of what the hell you mean by “value.” Value doesn’t exist in the abstract. Value only exists in terms of some constituency.

If a conversation takes place, intermediated by the network, between P1 and P2, the value created there may accrue to P1 or P2; or it may accrue to any of the entities that deploy the network N1..Nn; or it may accrue to the groups G1..Gn that enclose those players. For the economist gazing down from his ivory tower it may be all well and good to just say “the economy,” thus naming one of these groups, or maybe the union of these groups; but down in the trenches these issues are central to the problem at hand.

For example, you can design a network that encourages value generation for different ones of those entities. It’s not good enough to just say the network will have value and therefore we should go get one. You really have to make choices about how to encourage or channel the value that emerges.

In the end that’s what’s wrong with Metcalfe’s law, important as it is. It implies that the value arises in the pairs – rather than, say, the groups. The model of value colors how you think about the network; and if you get it wrong you’ll be surprised in unfortunate ways. For example, I think Metcalfe’s law is about right for the telecom network, while Reed’s law is closer to right for the Internet. That’s one key reason why the telecom industry is burnt toast.

But, not to put too fine a point on it, and remembering this is Axiom #1 for God’s sake, I think that Metcalfe’s law gets the order of the overall value too low, and that Reed’s law gets the order of the value too high. The right model is the one revealed in the power-law distribution that emerges from networks; and that model’s controlling terms are those revealed in the slope and bounding box around that curve. Somebody more mathematically inclined than I will have to frame that in a manner that competes with Reed’s law and Metcalfe’s law on their own modeling terms.
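To gesture at what such a model might look like (a sketch of my own, not a worked-out competitor to either law): assume value falls off with rank as rank^-slope, a Zipf-like stand-in for the power-law curve, and watch how the slope alone decides how much of the total value the head of the curve captures.

```python
def head_share(slope, n=10_000, head=100):
    """Under a Zipf-like value curve value(rank) = rank**-slope,
    what share of total value do the top `head` ranks capture?"""
    values = [rank ** -slope for rank in range(1, n + 1)]
    return sum(values[:head]) / sum(values)

for slope in (0.5, 1.0, 1.5):
    print(f"slope {slope}: top 1% captures {head_share(slope):.0%}")
```

Steepen the slope and the head swallows the curve; flatten it and value spreads down the tail. Those are the controlling terms I mean.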

Ah, that was fun … if you can’t rant in your own blog and all that…

I do love an ellipsis…

Turntable – a network-effect toy

When I first started thinking deeply about network effects I was very pleased to realize that there are some little examples: pot luck dinners, flea markets. And one of my favorites was the turntable, a toy found in playgrounds.

(photo: children playing on a turntable)
These days I’m more interested in how you get these network-effect things to start, or how you create new turntables. How you draw off something from the resulting energy that assures they can be self-sustaining, maintained.

So I just can’t believe how wonderful this example is. Via Rebecca Blood: a turntable that taps the energy in children’s play to pump water. What a tour de force! There are 500 of these in South Africa.

“I want to thank all the little people.”

Gee whiz! Boffins reveal that the little people create and the hubs in the power-law network just move those creations around!

Say it’s not true!

Pretty soon they will discover that Google didn’t author all those pages? The folks at Penguin didn’t write those classics? eBay actually doesn’t have _any_ of that cool junk they’re selling? Disney didn’t invent those wonderful stories? The librarian didn’t bring me those books?

The leaves of the network dig the potatoes, the hubs shuffle them around. Damn middlemen!

Seeds of its own destruction

Tim Bray posts an interesting comment.

I was talking today to this really smart guy named Jonathan Leblang who works for A9, and he said  “You know, Google’s success may conceal a death warrant.”  I said “Huh?”  He said “Well, the most useful Web pages used to be the ones that aggregated a bunch of useful links, and so people would point to those and Google would find them. Nowadays, why would anyone go to the work to put a page like that together if you can just rely on Google to find stuff?” Hadn’t thought about it that way.

Let me pick that apart, because while it’s deeply true in general, in this case it’s false.

First off it catches my interest because I’m a deep believer in the hypothesis that good things happen in the presence of a rich, deep, complex supply chain of stuff you can cobble together. That a house should be full of stuff, that a brain needs a lot of ideas in it, that a city is better than the country because it’s full of all kinds of stuff. You could say I’m more into the fecundity of clutter than efficiency.

One example of this struck me when I went to Paris the first time: the incredible diversity of food products in the markets. How could you possibly create a culture of food without that wealth of ingredients? A rich network of supply of a given kind creates a culture for that kind of activity. This is why cities tend to specialize.

The first half of the insight that Jonathan Leblang is noticing is that to solve a hard problem – like the “find what I’m looking for” problem that Google is working to solve – you first have to accumulate a huge assortment of ideas for how to solve it. I’m confident that Google and eBay are in the Bay Area because of the rich culture of ideas, supplies, and other stuff needed to nurture firms of that kind.

Jonathan notes that Google depends on a particular kind of supply; i.e. the supply of high quality pages full of links on a given topic.

The second half of the insight, the bleak half, is that as you solve the problem sufficiently well your reward will be to capture more and more of the users. You will aggregate all the demand and this will lead to starving out the supply. As the network effects begin to kick in, the users will start defaulting to your solution and alternatives will die out.

That is what’s happened to food in the US. Industrial franchised food consumes so much of the demand for food that there remains only a very small niche for food that’s not highly standardized. Eating in the rural regions of the US can be an amazingly consistent, dull experience. You have to work hard to avoid it.

This is just another way to look at what happens as the slope of the power-law curve begins to steepen and standards fall into place. Or, to recall the cliché: good-enough drives out better.

Now this is the interesting thing. I think this model doesn’t apply in the case of Google. In this case we are looking at a very common kind of confusion about the nature of “stuff.” Physical stuff (grapefruits, car parts, computers) is different from information stuff. The linkages of supply and demand are different. So the argument he’s making assumes that the supply of pages of aggregated links is somehow connected to the demand for such pages. That is just not true, or at least it’s not true in the same naked Darwinian scarcity-of-niche sense that it’s true for the food vendors.

People are packrats; they create a supply of those pages that aggregate stuff about whatever they are interested in, independent of the demand for those pages. So even if Google captures more and more of the traffic of folks looking for stuff, people will still make collections of info-stuff. The specialist blogging pages are a fine example of that. I don’t write this blog for my audience, I write it for me. (At this point we could run off on a subplot about Google acquiring Blogger to encourage supply.)

That’s entirely different from the food markets of Paris. If demand for passion fruit wanes, the supply of passion fruit will too. But for info-stuff, as Martin Geddes recently said over at Telepocalypse: “No scarcity, no market, no problem.”

That said, there is a scenario where info-stuff begins to look like physical stuff, i.e. when it begins to spin up a strong network effect. Thus Apache’s httpd server (whose network effect of mindless adoption, coupled with its rich community of complementary products and services) does drive out other web servers. It drifts toward becoming a physical object because its network effect has some physical qualities to it.

Open standards can help temper that effect; the HTTP standard does help avoid the entire web-server market collapsing into a singularity. The vicious competitive attempts to make the HTML standard more proprietary go a long way toward explaining the near-singularity that developed around Microsoft’s Internet Explorer. If you’re attempting to create public goods around info-stuff: closed standards enable scarcity, market, problem.