Category Archives: business modeling

Price Fixing and Knowledge Pools

If you sell widgets you often have a choice about how to price them. You can fix their price or you can engage in differential pricing. Differential pricing, i.e. trying to charge each customer more or less depending on how much value that customer thinks he will get from the product, has the benefit of increasing the number of customers you can reach. For example you can reach thrifty, poor, low usage customers. It has the deficit of raising transaction costs; for example some customers will spend additional time shopping for price. The more the buyer is aware that approximately the same goods are available at differing prices the more resources he will likely spend shopping. Note that any time the buyer spends shopping tends to imply a lack of trust. Lack of trust implies a risky market.
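Here's a toy sketch of that trade-off. All the numbers are invented, and the shopping overhead is lumped in as a simple per-sale deduction just to keep the illustration small:

```python
# Toy comparison of fixed vs. differential pricing.
# Willingness-to-pay figures are invented for illustration.
customers = [5, 8, 12, 20, 35, 60]          # the most each customer would pay

# Fixed pricing: one price for everyone; customers below it walk away.
fixed_price = 20
fixed_buyers = [w for w in customers if w >= fixed_price]
fixed_revenue = fixed_price * len(fixed_buyers)

# Differential pricing: charge each customer close to his willingness to pay,
# less a per-sale transaction cost (the haggling and price shopping).
transaction_cost = 3
diff_revenue = sum(w - transaction_cost for w in customers)

print(f"fixed:        {len(fixed_buyers)} buyers, revenue {fixed_revenue}")   # 3 buyers, 60
print(f"differential: {len(customers)} buyers, revenue {diff_revenue}")       # 6 buyers, 122
```

More customers reached, more money captured, but every sale now carries that haggling overhead, and the overhead grows as buyers catch on.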

Standards are a way that industries can engage in collusion or cooperation (take your pick) to temper the risk in a market for its participants. Here is a nice example of that. The auto-insurance industry needs data to set insurance rates. Each company has claims data which gives it a rough picture of the risk of insuring a given demographic. The demographic data available to one company is limited to its current customers. A company that insures mostly elderly people in Florida will have good data for that demographic.

To improve the quality of the data the firms pool it. The pool is managed by a non-profit organization set up by the firms in the industry. That data then becomes the standard estimate of the risk of insuring a given car for a given class of individual. Some of this data is available on the web.
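A minimal sketch of what the pooling buys you, with invented claim counts. Each firm's own estimate is only good where it has lots of customers; the pooled estimate is built on everybody's exposure:

```python
# Toy illustration of pooling claims data across insurers. Numbers invented.
# Each entry is (car-years of exposure, claims paid) for a demographic segment.
firm_data = {
    "Firm A": {"elderly_fl": (9000, 450), "young_urban": (150, 20)},
    "Firm B": {"elderly_fl": (200, 14),   "young_urban": (7000, 840)},
}

# Each firm alone: a decent estimate only where it has lots of customers.
for firm, segments in firm_data.items():
    for segment, (exposure, claims) in segments.items():
        print(f"{firm} {segment}: {claims / exposure:.3f} claims per car-year on {exposure}")

# The pool: sum everyone's exposure and claims per segment, then re-estimate.
pooled = {}
for segments in firm_data.values():
    for segment, (exposure, claims) in segments.items():
        e, c = pooled.get(segment, (0, 0))
        pooled[segment] = (e + exposure, c + claims)

for segment, (exposure, claims) in pooled.items():
    print(f"pool {segment}: {claims / exposure:.3f} claims per car-year on {exposure}")
```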

All this reduces the risk for the auto-insurance industry and leaves them to compete on other attributes: customer service, marketing, in-house efficiencies. It also reduces the chance that one company will give you a better price than another, since it standardizes the measures used thru-out the industry for sizing up a customer prior to quoting him a price.

Notice how the data pool is very similar to a standards body.

It helps set standard prices.

The data pool reduces transaction costs in the market, letting it run more efficiently, and it lowers risk. It tends to shift the pricing from differential to fixed.

The data pool lowers the need for firms to merge. Without it the only way to get a large pool would be to merge. Stated another way the data pool provides a way for small firms to collaborate to gain knowledge that only large firms might otherwise aggregate.

I got to thinking about this because I was seeking other examples of collaborative knowledge pooling. I.e. other than open source, where the source code is the obvious reification of the pool, and other than classic standards bodies, where you find patent pools.

Peering contracts, say between Internet ISPs, look like a fourth example.

Affection and Power

Stanley Kober writes about emerging alliances that don’t include the United States.

In his classic work, The Prince, Machiavelli wrote “a prince ought to inspire fear in such a way that, if he does not win love, he avoids hatred; because he can endure very well being feared whilst he is not hated.”

An idea we can sum up in this silly B-school style drawing. The line illustrates a cartoon version of what the Bush Iraq strategy has done for America.

Or consider the example of Microsoft. They began weak and accumulated a huge pool of affection. At the same time they became powerful. They abused that power, as amply demonstrated by the anti-trust case. Affection transitioned to animosity. Meanwhile the bloom of Internet innovation led to a perception that they were weak, or at least not as powerful as previously thought.

Or consider Google. Their "Don't be evil" motto could be viewed as a strategic necessity; i.e. stay out of the left side of that graph. They currently sit comfortably in the upper right. Powerful, and therefore feared; but at the same time held in great affection.

When you are both powerful and well liked you need to avoid the trap Microsoft fell into. You need to exercise your power in ways that sustain that affection. The temptation is to exercise your power in offensive ways, achieving a series of short term wins while doing progressive damage to your good will. One pervasive temptation is to exercise your power in secret, assuming that you can have your cake and eat it too. It is always a bit difficult for an institution to distinguish true affection from mere sucking up because it is powerful. Both of these can make everything look fine and dandy until the bottom falls out. The Microsoft case is almost a worst case scenario. It woke up to discover that its audience/customers' affection was more calculating than they realized, and at the same time they were revealed to be a pretty offensive lot.

Notice how terrorism plays out on this plane. Your typical terrorist act has very little direct impact on the strength of its target. Modern open economies are hard to weaken with random acts of violence. Terror can weaken the states it targets in two ways though. It increases uncertainty and risk. That lowers investment, which weakens the economy. I suspect that's its largest effect on the vertical axis. A carefully targeted terrorist act can, of course, do greater harm because open market societies tend to condense hubs that make easy targets, once you notice them.

The terrorist, of course, has no affection for his target. The initial effect of the act of terror is to create sympathy for the victims; i.e. it moves the dot to the right on that chart. But there is a scenario where the act of terror moves the dot to the left. The terrorist believes that his opponent is evil and that those who hold him in affection are deluded. If only they could see the reality of the situation their affection would dissipate. So part of the goal of an act of terror is to force the opponent to reveal his true colors.

The extremely strong entity is particularly likely to fall right into this trap. Again consider Microsoft. When the Internet came over the horizon it created, in due course, a sense of panic inside of Microsoft. They responded by exercising a lot more of their market power. Those of us who had watched that market power destroy other parts of the ecology weren't surprised when that exercise was quite offensive. But this time around it was publicly revealed by the anti-trust case.

Mozilla Corp.

Cool, the Mozilla Foundation has budded off a commercial taxable subsidiary. I agree with Karim. This is a very exciting development.

We have seen numerous attempts by commercial firms to capture some of that Open Source magic, but most of these have come from people whose motives are principally commercial. Now there is nothing wrong with those motives, but it tends to color their attempts. The motivations that serve the establishment and stewardship of a rich open commons tend to move progressively (sic) to the back burner.

It is difficult to create a hybrid in the space between these two very distinct ethical frameworks. It is not entirely clear if one even exists. What is clear though is that a lot of people from the commercial side are searching really hard to find one. I'm always happy to see search parties heading out from the nonprofit side of the space.

This is a particularly important one though.

My bemused characterization of the driving force for most open source start ups goes as follows: On the one hand we have free stuff! On the other hand we have rich CTO/CIOs! We will just stand in the middle and make money! It’s a plausible premise.

If you stick a firm into that gap there are a lot of other aspects to bridging between those two; it's not just money. For example on the open side you have a high value placed on the creation of a huge pool of options, while on the commercial side you have a high value placed on minimizing risk and maximizing predictability. On the open side you have an enthusiasm for rapid release and adaptation; on the commercial side you're required to sync up in tight lock step with the buying organization's schedules. On the open side the evolution of the project is a continuous negotiation among the project's participants; a deep relationship. Participants are often locked in. On the commercial side the relationships are kept at arm's length with contracts and specifications. Buyers strive to commoditize markets with multiple vendors, avoiding lock-in. I could go on.

There is an argument to be made that the CTO/CIO side of these businesses should adapt. I have no doubt that over time they will. For example I suspect that CTOs will adapt before CIOs. But it is always hard to shift an installed base. It's obviously hard when you dig into all the APIs of a complex piece of software, like Microsoft Windows. But it is even harder when you dig into the complex tissue of social webs. Changing the rules for how firms manage software isn't easy. That's why the CIO organizations will shift more slowly than the CTO organizations; they have a much more complex social web to adapt. At minimum a much larger one.

But back to the reason why the Mozilla move strikes me as important. It's not just that I'm glad to see experimentation coming out of the open side of things.

Firefox is key. Installed base on the client side is key. To reach large swaths of market share the Mozilla community needs to solve a consumer marketing problem. That includes finding the ways and means to move the product down the existing distribution channels. Those channels are directly analogous to the gaps between the open source community and the needs of the CTO/CIO software users.

It’s my hope that the Mozilla Corp. can enable them to leverage those channels.

Just to mix the two examples together, consider how hard it is for a CIO to justify installing Firefox rather than IE given how extensible it is. While for an open source guy that extensibility looks like opportunity, for the CIO it looks like increased risk and heightened support costs. An open source guy thinks Grease Monkey is cool. It makes the guys in the IT department quake in their boots. A variant of Firefox that addresses those concerns is a no brainer. It gives the CIO access to the vibrant innovation around Firefox, but it allows him to limit the risks.

Exciting.

Digital Fountains in the Walled Garden

Bummer.

Returning yet again to the topic of how the producer of an information good can shift the distribution load out onto a swarm of consumers. The producer can package his content in ways that make it easier and more likely that the swarm can collaborate. For example he can distribute redundant copies of the data or provide lots of checksums.

The extreme case of this is sometimes called a digital fountain, like a water fountain. The producer sprays a stream of droplets and if the consumer can catch a cup full then he can reassemble the original cup of content. And it turns out there are some very effective algorithms for enabling just that.

Here’s a short and simple survey paper on digital fountains (pdf).

…an idealized digital fountain should …

  • A source generates a potentially infinite supply of encoding packets from the original data. Ideally, encoding packets can be generated in constant time per encoding packet given the original data.
  • A receiver can reconstruct a message that would require k packets to send … once any k encoding packets have been received. This reconstruction should also be extremely fast, preferably linear in k.
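To make the droplet metaphor concrete, here's a toy fountain in a few lines of Python. It is not the patented codes the survey describes, just the simplest possible stand-in: every droplet is the XOR of a random subset of the source blocks, and the receiver decodes by repeatedly peeling off droplets that have only one unknown block left.

```python
import random

def fountain(blocks):
    """Spray an endless stream of droplets. Each droplet is the XOR of a random
    subset of the source blocks, tagged with which blocks went into it."""
    k = len(blocks)
    while True:
        indices = set(random.sample(range(k), random.randint(1, k)))
        payload = 0
        for i in indices:
            payload ^= blocks[i]
        yield indices, payload

def catch(stream, k):
    """Catch droplets until the cup is full: whenever a droplet has exactly one
    unresolved block, recover that block and keep peeling."""
    recovered, caught = {}, []
    for droplet in stream:
        caught.append(droplet)
        progress = True
        while progress:
            progress = False
            for indices, payload in caught:
                unresolved = indices.difference(recovered)
                if len(unresolved) == 1:
                    i = unresolved.pop()
                    for j in indices:
                        if j != i:
                            payload ^= recovered[j]   # subtract the blocks we already know
                    recovered[i] = payload
                    progress = True
        if len(recovered) == k:
            return [recovered[i] for i in range(k)]

blocks = [0x10, 0x22, 0x3A, 0x47, 0x55, 0x68, 0x7C, 0x81]   # the original cup of content
print(catch(fountain(blocks), len(blocks)) == blocks)        # True once enough droplets land
```

With this naive uniform mixing you'll typically catch a few times k droplets before the cup fills; the whole point of the tuned degree distributions in the real codes is to get that down to barely more than k.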

Amazingly there are designs that come extremely close to that goal. But.

… Most of the work that has been described above has been undertaken by employees of or consultants with the company Digital Fountain, Inc. … the company has a number of patents issued and pending that appear to cover both the fundamental theory and specific implementation designs …

So, no network effect and this stuff will get designed into only those industrial standards that are pay to play. Damn.

Startup

Oh this made me snort!

I worked for a company whose plausible premise was that lots of CFOs are terribly unhappy because nobody in their firms actually computes the NPV (Net Present Value) of the various projects they are considering. The software was an expert system that the CFO could hand to everybody and demand they use as part of the project approval process. The premise, while plausible, was wrong.
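The bitter joke, of course, is that the core calculation is tiny. A minimal sketch, with an invented project and discount rate:

```python
# Net Present Value: discount each period's cash flow back to today and sum.
def npv(rate, cash_flows):
    """cash_flows[0] is the up-front outlay (negative); the rest are per-period."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A hypothetical project: spend 100 now, get back 30 a year for five years.
project = [-100, 30, 30, 30, 30, 30]
print(round(npv(0.10, project), 2))   # 13.72 -> positive at a 10% rate, so approve
```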

It's easy to overestimate the demand for good practice. Particularly if you're an expert in said practice.

Rivercrossing

I wish to take issue with this essay on disintermediation at Ad Age. It's really pretty good, but it's more fun to rant. Here's a pull quote:

But the truth is, the products that are threatened by disintermediation are not imperiled because of technology; they are imperiled because they are based on models that offer less value to the customer than competing alternatives. In example after example, the middleman isn’t being cut out. He’s simply being replaced by a better one.

This is worth repeating: What we have grown to call disintermediation is, at the end of the day, simply the cold reality of someone doing our job better than we are. If you sense the cold breath of “disintermediation” on your back, more likely than not a bunch of upstarts are delivering your business’ core value proposition for less cost and in a better fashion than you are. And while it seems a bit obvious, it’s nevertheless true: You’ve probably fallen victim to old Railroad disease – you thought you were in the train business, but meanwhile, the other guys have figured out a better approach to moving cargo around the country.

That’s misleading.

Intermediaries are like bridges over a river. They provide the populations on either side of the river a means of getting to each other. The newspaper, for example, provides a bridge between advertisers on one side and consumers on the other. The marketplace provides a way for buyers and sellers to find each other, transact their business, and clear the resulting books.

The event of disintermediation is always associated with somebody building another bridge. If you own a bridge, lucky you, then new bridges are a threat to your business. Technology both enables new bridges at lower cost and makes it easier for customers to reroute their behavior so they can use the new bridge. To say that the bridge owners are "not imperiled because of technology" is bogus.

A bridge is a huge capital asset that owners typically spend a long time and a lot of effort to acquire. For example ClearChannel here in the US spent a lot, both thru political maneuvering and thru acquisitions, to control a large chunk of the nation's radio stations. They did that because they viewed that channel (bridge) as a powerful intermediary that they could charge a nice toll for folks to travel over.

If you own a large expensive asset it's a good idea to keep an eye on how hard it is to build a substitute. What made the asset expensive in the past is often not what makes it hard to reproduce today.

Maybe the railroad guys were blind to how automobiles were going to substitute in the role of cargo handlers, but I doubt it. The railroads substituted for canals. And all three, canals, railroads, and highways, arose as substitutes thru the usual combination of government support and technology. This story has been going on for a very long time.

When the new bridge appears it offers some set of features. Some of those features, compared to the old bridge's, are better. Over time customers see that and start to give a portion of their business to the new bridge. So yes, the old bridge owner can look at that as "simply the cold reality of someone doing our job better." That's true but terribly incomplete.

The new bridge changes the nature of the market. It reframes the measures of quality for the market. This usually causes the market to get larger. It always causes a huge disruption in what the definition of quality is. The old bridge owner's hardest puzzle, when the new bridge opens up, is shifting from a world in which he knows what quality means into one where it's up for grabs. The new entrants want to redefine quality, pulling it toward whatever is to their competitive advantage. These advantages are likely a direct fallout of whatever force in the environment enabled them to build their new bridge. Again that's often technology, though it can just as likely be a shifting regulatory climate, demographics, etc.

While the essay's blithe setting aside of the role of technology is what first pulled my cord, it's this second more subtle failing that I find more problematic. The essay ends up advising the old bridge owners to return to their roots, to retreat back into their comfort zone, as they look for the key qualities to emphasize as they adapt to the opening of a bridge next to theirs. This is a bit like advising them to put a new coat of paint on the snack bar next to the toll booth and send the toll collector's uniforms out to be laundered. It's what they want to hear, but it's not what they need to understand.

Old bridge owners rarely become irrelevant; in fact they often continue to grow. After the dust settles the old bridge owners find they are in a market whose consensus about what attributes define quality has changed. The attributes that define quality are not absolute. In one time frame quality may be defined by safety, while in another it may be defined by price, and then later by convenient availability. If the old bridge owner is to remain on top, in market share terms, then he needs to shift his value proposition toward the newly emerging attributes and depreciate the ones that used to be critical.

When the definition of quality changes the market becomes extremely confusing. It is some comfort for the old bridge owner that the new bridge owner probably doesn't understand what the new rules are either. The new bridge owner leveraged an opportunity handed to him, typically by technology, but he doesn't know what qualities his customers care about; he just has some guesses. But the new bridge owner has something of inestimable value in working out what the answer is: customer contact.

The article that triggered this rant is interesting. It does, to a degree, advise the old bridge owner that he needs to understand what the emerging quality vector space is. But it pretends that the answer to that question can be gotten off the shelf. I don't think that's true. I think you can't ask a mess-o-pundits for the answer. First because they aren't in the trenches, second because they don't do a very good job of admitting how everything is in flux, and third because nobody is going to take action of the magnitude required in these situations on the advice of outsiders. You have to find a way to get real customer contact and let that drive your adaptation.

You gotta go down river. You gotta live in the village emerging around the new bridge. And if at all possible buy one. Not for the usual rolling-up-market-share reasons, but so you can inform your confused self thru direct experience.

Market Pricing and Cellphones

A friend of mine has traveled to South Africa from here in the US. She now has a cellphone and was pleased to report that incoming calls are free!

Here’s a little insta-theory I’ve constructed about that.

If you own a marketplace you charge the buyers and sellers who come into your market fees for the services provided by your market making activities. There are lots of different kinds of markets. For example eBay, where they charge the sellers. Or singles bars (aka meat markets), where they tend to forgo charging women the cover charge.

Phone companies provide markets where calls are transacted. Instead of buyers and sellers they have callers and call recipients.

It's very rare for a market maker to charge both sides of the transaction exactly the same amount to come to the fair. The sellers at the boat show pay a lot more for access. But if it's a good enough fair the owner can charge the crowd to get into the tent. As long as the market owner can distinguish the buyers from the sellers he can use that information to price discriminate. Or to put it in more general terms, he can charge more to the ones who are most willing to pay.

Lots of things drive willingness to pay, but generally the wealthy are more willing to pay than the poor, and that's the heart of my theory.

In the US cell phones emerged as a toy for the rich. So when the cell phone first emerged the phone owners, on average, were more willing and able to pay for the calls than the average land line owner. When cell phones showed up most everybody here had a land line. The market owners, i.e. phone companies, naturally set up the pricing to charge the cell phone side of the market.

The situation was totally reversed in many other regions. Land lines weren't common. In fact having a land line was a signal to the market owner that you were better off, and hence more willing to pay, than the average citizen. The cell phone companies sold their product to the unserved market, i.e. a market on average less well off and hence with lower willingness and ability to pay. So it was natural for them to charge the land line side more than the cell side.

Charging the call initiator for the call is a proxy for what they really want to do, which is to charge the well off land line callers more to call over into the less well off population of cell phone owners.
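Here's one way to caricature that logic in code. The prices and willingness-to-pay numbers are invented, and splitting the charge in proportion to each side's average willingness to pay is just my toy stand-in for whatever the carriers actually do:

```python
# Toy model: the market maker loads the price of a call onto whichever side
# is, on average, more able and willing to pay. All numbers are invented.
PRICE_PER_MINUTE = 0.30   # what the market maker wants to collect per minute

def call_charges(caller_side, recipient_side, willingness):
    """Split the per-minute price between the two sides of the call in
    proportion to the average willingness to pay of each side's population."""
    w_caller, w_recipient = willingness[caller_side], willingness[recipient_side]
    caller_share = w_caller / (w_caller + w_recipient)
    return PRICE_PER_MINUTE * caller_share, PRICE_PER_MINUTE * (1 - caller_share)

# US when cell phones arrived: cell owners were the well-off minority.
us = {"landline": 1.0, "cell": 3.0}
print(call_charges("landline", "cell", us))         # the cell side bears most of the charge

# Regions where the land line was the luxury good: the pricing flips.
elsewhere = {"landline": 3.0, "cell": 1.0}
print(call_charges("landline", "cell", elsewhere))  # the land line caller pays; incoming is cheap
```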

It's an insta-theory. It would be real work, which I'm too lazy to do, to flesh it out completely. For example there is an entire subplot about cross border call termination that should have similar themes.

I find it fascinating how a pricing model that emerges early in a market's life can persist, making it extremely difficult for the market to transition into new forms.

Shifting Distribution and Coordination Costs

Here’s a slightly formal way to look at various ways of coordinating an activity. This grew out of my thinking about how to push content from producers to consumers without introducing a hub that coordinates the work. I was thinking about who bears the bandwidth costs.

One obvious way to solve the problem is to have the producer ship the content directly to all N consumers. He pays for N units of outbound bandwidth and each of his consumers pays for 1 unit of inbound bandwidth. The total cost to get the message out is then 2*N. Of course I’m assuming inbound and outbound bandwidth costs are identical. If we assume that point to point message passing is all we’ve got, i.e. no broadcast, then 2*N is the minimal overall cost to get the content distributed.
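A tiny bit of accounting for that first scheme, with an arbitrary N, just to have something to compare the later schemes against:

```python
# Direct distribution: producer pushes a copy to each of N consumers.
N = 100
costs = {"producer": N}                                  # N units of outbound bandwidth
costs.update({f"consumer_{i}": 1 for i in range(N)})     # 1 unit of inbound bandwidth each
print(costs["producer"], sum(costs.values()))            # 100 200  (total == 2*N)
```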

Two issues. All else being equal the producer would like to shift costs onto the consumers. Secondly, the hard problem here is not moving the bytes around; the hard problem is coordinating their movement. In our first model most of the coordination cost is borne by the producer. That has the benefit that coordination expertise will accumulate, so the cost of coordination can fall and the quality can rise. The producer retains substantial control over the relationships.

It's not hard to imagine solutions where the consumers do more of the coordination, the cost is split more equitably, the producer's cost plummets, and the whole system is substantially more fragile. For example we can just line the consumers up in a row and have them bucket brigade the content. We still have N links, and we still have a total cost of 2*N, but most of the consumers are now paying for 2 units of bandwidth: one to consume the content and one to pass it on. In this scheme the producer lucks out and has to pay for only one unit of bandwidth, as does the last consumer in the chain. This scheme is obviously very fragile. A design like this minimizes the chance of coordination expertise condensing, so it will likely remain of poor quality and high cost. Control over the relationships is very diffuse.
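Same accounting, bucket brigade style:

```python
# Bucket brigade: producer hands the content to the first consumer,
# who passes it down the line.
N = 100
costs = {"producer": 1}                                      # one outbound copy
costs.update({f"consumer_{i}": 2 for i in range(N - 1)})     # receive it, then pass it on
costs[f"consumer_{N - 1}"] = 1                               # the end of the line only receives
print(costs["producer"], sum(costs.values()))                # 1 200  (total is still 2*N)
```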

We can solve the distribution problem by adding a middleman. The producer hands his content to the middleman (adding one more link) and the middleman hands the content off to the consumers. This market architecture has N+1 links, for a total cost of 2*(N+1). Since the middleman can serve multiple producers the chance for coordination expertise to condense is generally higher in this scenario. Everybody except the middleman sees their costs drop to 1. Assuming the producer doesn't mind being intermediated he has an incentive to shift to this model. His bandwidth costs drop from N to 1, and he doesn't have to become an expert on coordinating distribution. The middleman becomes a powerful force in the market. That's a risk for the producers and the consumers.
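And the middleman's accounting:

```python
# Middleman: producer sends one copy to the middleman, who fans it out to N consumers.
N = 100
costs = {"producer": 1, "middleman": 1 + N}              # receive once, then send N copies
costs.update({f"consumer_{i}": 1 for i in range(N)})     # 1 unit of inbound bandwidth each
print(costs["middleman"], sum(costs.values()))           # 101 202  (total == 2*(N+1))
```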

It is possible to solve problems like these without a middleman; instead we introduce exchange standards, replacing the middleman with a standard. Aside: note that the second illustration, consumers coordinate, is effectively showing a standards based solution as well. We might use a peer to peer distribution scheme, like BitTorrent for example. To use BitTorrent's terminology the substitute for the middleman is called "the swarm" and the coordination is done by an entity known as "the tracker." I didn't show the tracker in my illustration. When BitTorrent works perfectly the producer hands one Nth of his content off to each of the N consumers. They then trade content amongst themselves. The cost is approximately 2 units of bandwidth for each of them. The tracker's job is only to introduce them to each other. The coordination expertise is condensed into the standard. The system is robust if the average consumer contributes slightly over 2 units of bandwidth to the enterprise; it falls apart if that average falls below 2. A few consumers willing to contribute substantially more than 2 units can be a huge help in avoiding market failure. The producer can fill that role.
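And the idealized swarm's accounting, where the producer seeds one Nth of the content to each consumer and the consumers trade the rest among themselves:

```python
# Idealized swarm: the producer uploads one full copy's worth of pieces (1/N to
# each consumer); each consumer downloads a full copy and uploads roughly
# (N-1)/N of a copy to its peers.
N = 100
costs = {"producer": 1}
costs.update({f"consumer_{i}": 1 + (N - 1) / N for i in range(N)})   # just under 2 each
print(round(sum(costs.values()), 2))                                 # 200.0  (total == 2*N)
```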

Of course swarming is not the only way we can arrange a standards based solution to this problem. It's notable because it is both reliable and the total bandwidth cost can be 2N, the minimum. I find it interesting that as the cost approaches that minimum the swarm becomes unreliable. The second model, where consumers coordinate the distribution in a bucket brigade, can be made more reliable by introducing additional redundant links; these are another way to buy reliability in exchange for increasing the cost above the 2N minimum.

I find it fascinating to see how the coordination costs, market power, and reliability of the market's clearing are shifted around in these various scenarios. The bandwidth costs act as a partial proxy for those. Market participants are most concerned about risk. They want to place their faith in a market structure. Once they rendezvous around a given structure they can have a meaningful discussion about the risks that structure creates. The first model has the risk of a powerful producer. The second and last models have the risk of policing standards compliance. The middleman has well known agency risks.

Standards based solutions always have problems with policing and freeloading. I think it's neat to notice that if the producers and consumers are exchanging data over many time periods they can establish a trading framework, with reputation, currency, and market clearing schemes, that assures everybody contributes their 2 units of bandwidth. In effect you can make such systems self policing in much the same manner used in a competitive market. Which goes to reinforce the way that exchange standards create and shape markets.