Monthly Archives: August 2005

BlogDay, are you ready?

BlogDay posting instructions:

  • Find 5 new Blogs that you find interesting
  • Notify the 5 bloggers that you are recommending them on BlogDay 2005
  • Write a short description of the Blogs and place a link to the recommended Blogs
  • Post the BlogDay Post (on August 31st) and
  • Add the BlogDay tag using this link: http://technorati.com/tag/BlogDay2005 and a link to BlogDay web site at http://www.blogday.org

No, later.

So, are the end to end principle (pdf) and worse is better really the exact same idea? Both are kind of negative in tone. Both have MIT appearing as a character in their stories.

End to end isn’t a paper about the risks of agency, or middlemen, but those issues are clearly in the background. Worse is better is more conscious of the social engineering that is going on as the designer makes choices about the shape of his system. Neither is particularly clear that the designer is crafting an option space, a search paradigm, a platform, or a standard. To my mind both have become deeply associated, over time, with our culture’s romantic notions about the little guy, the rural, the entrepreneur. Ideas that currently appear in the role of the “long tail.”

Price Fixing and Knowledge Pools

If you sell widgets you often have a choice about how to price them. You can fix their price or you can engage in differential pricing. Differential pricing, i.e. trying to charge customers more or less depending on how much value each customer thinks he will get from the product, has the benefit of increasing the number of customers you can reach. For example you can reach thrifty, poor, low-usage customers. It has the drawback of raising transaction costs; for example, some customers will spend additional time shopping for price. The more the buyer is aware that approximately the same goods are available at differing prices the more resources he will likely spend shopping. Note that any time the buyer spends shopping tends to imply a lack of trust. Lack of trust implies a risky market.
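
To make the tradeoff concrete, here is a small sketch in Python; the willingness-to-pay numbers and unit cost are invented, and “perfect” differential pricing is an idealization, but it shows why reaching the thrifty buyers can be worth the extra transaction hassle.

```python
# Hypothetical willingness-to-pay for a widget among five buyers.
willingness_to_pay = [30, 22, 15, 9, 5]
unit_cost = 4

def fixed_price_profit(price):
    """Profit if everyone pays the same price; only buyers who value
    the widget at least that much will buy."""
    buyers = [w for w in willingness_to_pay if w >= price]
    return len(buyers) * (price - unit_cost)

# Best single fixed price: just try each buyer's valuation as the price.
best_fixed = max(fixed_price_profit(p) for p in willingness_to_pay)

# Perfect differential pricing: charge each buyer right at his valuation,
# which reaches even the thrifty, low-usage buyers.
differential = sum(w - unit_cost for w in willingness_to_pay)

print(best_fixed, differential)  # 36 vs. 61 with these made-up numbers
```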

Standards are a way that industries can engage in collusion or cooperation (take your pick) to temper the risk in a market for its participants. Here is a nice example of that. The auto-insurance industry needs data to set insurance rates. Each company has claims data which gives it a rough picture of the risk of insuring a given demographic. The demographic data available to one company is limited to its current customers. A company that insures mostly elderly people in Florida will have good data for that demographic.

To improve the quality of the data they pool their data. The pool is managed by a non-profit organization set up by the firms in the industry. That data then becomes the standard estimate of the risk of insuring a given car for a given class of individual. Some of this data is available on the web.
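
A minimal sketch of the pooling arithmetic, with made-up firms, demographics, and counts: each firm’s slice is thin for some demographic, but summing policies and claims before computing the rate gives everyone a steadier estimate.

```python
# Hypothetical claims experience per firm: demographic -> (policies, claims).
firm_a = {"elderly-FL": (9000, 540), "young-NE": (200, 30)}
firm_b = {"elderly-FL": (300, 21), "young-NE": (8000, 960)}

def pooled_rates(*firms):
    """Combine each firm's thin slices into one industry-wide claim rate."""
    totals = {}
    for firm in firms:
        for demo, (policies, claims) in firm.items():
            p, c = totals.get(demo, (0, 0))
            totals[demo] = (p + policies, c + claims)
    return {demo: c / p for demo, (p, c) in totals.items()}

print(pooled_rates(firm_a, firm_b))
# Firm A alone has only 200 young-NE policies to estimate from;
# the pool has 8200, so the standard rate is far less noisy.
```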

All this reduces the risk for the auto-insurance industry and leaves them to compete on other attributes: customer service, marketing, in-house efficiencies. It also reduces the chance that one company will give you a better price than another, since it standardizes the measures used throughout the industry for sizing up a customer prior to quoting him a price.

Notice how the data pool is very similar to a standards body.

It helps set standard prices.

The data pool both reduces transaction costs in a market, letting it run more efficiently, and lowers risk. It tends to shift the pricing from differential to fixed.

The data pool lowers the need for firms to merge. Without it the only way to get a large pool would be to merge. Stated another way the data pool provides a way for small firms to collaborate to gain knowledge that only large firms might otherwise aggregate.

I got to thinking about this because I was seeking other examples of collaborative knowledge pooling. I.e. other than open source, where the source code is the obvious reification of the pool. Other than classic standards bodies, where you find patent pools.

Peering contracts, say between Internet ISPs, look like a fourth example.

Affection and Power

Stanley Kober writes about emerging alliances that don’t include the United States.

In his classic work, The Prince, Machiavelli wrote “a prince ought to inspire fear in such a way that, if he does not win love, he avoids hatred; because he can endure very well being feared whilst he is not hated.”

An idea we can sum up in this silly B-school style drawing. The line illustrates a cartoon version of what the Bush Iraq strategy has done for America.

Or consider the example of Microsoft. They began weak and accumulated a huge pool of affection. At the same time they became powerful. They abused that power, as amply demonstrated by the anti-trust case. Affection transitioned to animosity. Meanwhile the bloom of Internet innovation led to a perception that they were weak, or at least not as powerful as previously thought.

Or consider Google. Their “Don’t be evil” motto could be viewed as a strategic necessity; i.e. stay out of the left side of that graph. They currently sit comfortably in the upper right. Powerful, and therefore feared; but at the same time held in great affection.

When you are both powerful and well liked you need to avoid the trap Microsoft fell into. You need to exercise your power in ways that sustain that affection. The temptation is to exercise your power in offensive ways, achieving a series of short-term wins while doing progressive damage to your goodwill. One pervasive temptation is to exercise your power in secret, assuming that you can have your cake and eat it too. It is always a bit difficult for an institution to distinguish true affection from mere sucking up because it is powerful. Both of these can make everything look fine and dandy until the bottom falls out. The Microsoft case is almost a worst-case scenario. It woke up to discover that its audience/customers’ affection was more calculating than it realized, and at the same time it was revealed to be a pretty offensive lot.

Notice how terrorism plays out on this plane. Your typical terrorist act has very little direct impact on the strength of its target. Modern open economies are hard to weaken with random acts of violence. Terrorists can weaken the states they target in two ways though. They increase uncertainty, i.e. risk. That lowers investment, which weakens the economy. I suspect that’s their largest effect on the vertical axis. A carefully targeted terrorist act can, of course, do greater harm because open market societies tend to condense hubs that make easy targets, once you notice them.

The terrorist, of course, has no affection for his target. The initial effect of the act of terror is to create sympathy for the victims; i.e. it moves the dot to the right on that chart. But there is a scenario where the act of terror moves the dot to the left. The terrorist believes that his opponent is evil and that those who hold him in affection are deluded. If only they could see the reality of the situation then their affection would dissipate. So part of the goal of an act of terror is to force the opponent to reveal his true colors.

The extremely strong entity is particularly likely to fall right into this trap. Again consider Microsoft. When the Internet came over the horizon it created, in due course, a sense of panic inside of Microsoft. They responded by exercising a lot more of their market power. Those of us who had watched that market power destroy other parts of the ecology weren’t surprised when that exercise was quite offensive. But this time around it was publicly revealed by the anti-trust case.

Artist Trading Cards

Artist trading cards are the size of a baseball card. Artists make them to trade with each other. Some are just amazing. The one at right is one of my wife‘s. It was sent to the other side of the planet. One of these is coming back.

You know, this Internet thing is pretty neat.

update: HA! These are just like QSL cards. I bet old ham radio guys grumble “been there, done that” just as much as old Lisp guys.

ePOST

This is a very cool example of how peer to peer systems might displace centralized hubs.

Most email today is stored on centralized servers; i.e. hotmail, gmail, the firm’s email servers, etc. Mail, i.e. the post office, is one of the oldest centralized services. The network effects are strong and they tend toward creating a monopoly.

ePOST is a serverless peer to peer email system. Download it, back up your copy, fire it up, set up your email client, exchange mail. Later blow up your computer (this step is optional). Pull the backup, fire it up, and magic! All your mail reappears.

Your copy is part of a peer to peer swarm that is collaborating to exchange encrypted mail and store all the mail redundantly among the members.

The hub is unnecessary. Hotmail? gmail? AOL? Unnecessary!

Now to be honest this system is clearly a first-generation effort; a proof that these hub killers are possible.

Christensen’s last book on innovation put a model into my mind about how innovation proceeds. It’s kind of systolic. On one hand innovation proceeds by cobbling together, out of parts at hand, solutions to problems at hand. These solutions are, well, Rube Goldberg like. But they are cool because they solve problems that weren’t solved before; which makes them valuable. Time passes and these solutions are refined. And then enough knowledge is accumulated that the modular boundaries in the solutions become apparent. These modules are then broken out and fall out as pieces. The pieces can then engender another round of problem solving.

A system like ePOST feels like one of the highly integrated systems of the first phase. At the same time its designers are part of a storm of activity going on in the peer to peer community to find the modular boundaries so the component parts can be distilled out.

If you pop the lid on ePOST you find first, second, and third drafts of a lot of these modules:

  • Peers self-assign a place on a ring of integers; that ring is the swarm (a toy sketch of this follows the list).
  • FreePastry – allows one peer to send messages to the peer nearest any point on the ring.
  • PAST – allows distributed, reliable, key/value hash table lookups in the swarm.
  • POST – allows encrypted object storage in the swarm along with encrypted user-to-user messaging.
  • ePOST – builds mail on top of POST.
  • Glacier – provides durable storage so that even if a huge percentage of the swarm dies you can still recover all your data.
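
Here is a toy sketch of the first item, the ring. It is not how FreePastry actually routes (Pastry forwards toward the numerically closest node id over several hops); this just shows peers hashing themselves onto a ring of integers and a key landing on the next peer around the ring.

```python
import hashlib
from bisect import bisect_left

RING_BITS = 32
RING_SIZE = 2 ** RING_BITS

def ring_position(name: str) -> int:
    """Hash a peer name or a key onto the ring of integers."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % RING_SIZE

class ToyRing:
    def __init__(self, peer_names):
        # Each peer self-assigns a position by hashing its own name.
        self.points = sorted((ring_position(p), p) for p in peer_names)

    def route(self, key: str) -> str:
        """Return the peer responsible for the point nearest this key."""
        k = ring_position(key)
        idx = bisect_left(self.points, (k, ""))
        if idx == len(self.points):  # wrap around the end of the ring
            idx = 0
        return self.points[idx][1]

swarm = ToyRing([f"peer-{i}" for i in range(8)])
print(swarm.route("alice@example.org/inbox/msg-42"))
```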

There is the most marvelous amount of research going on around all these modules these days. ePOST is a beautiful example of what is becoming possible. This work on peer to peer DNS lookups is another.

End to End is Back?

I got an Onion of the day calendar for Christmas and this gem from 2002 came up recently.

U.S. Middlemen Demand Protection From Being Cut Out

WASHINGTON, DC-Some 20,000 members of the Association of American Middlemen marched on the National Mall Monday, demanding protection from such out-cutting shopping options as online purchasing, factory-direct catalogs, and outlet malls. “Each year in this country, thousands of hard-working middlemen are cut out,” said Pete Hume, a Euclid, OH, waterbed retailer. “No one seems to care that our livelihood is being taken away from us.” Hume said the AAM is eager to work with legislators to find alternate means of passing the savings on to you.

The classic paper by Saltzer, Reed, and Clark, End-to-End Arguments in System Design, which gave rise to the stupid network, is principally about how to expend your design resources. It argues that your communication subsystem needn’t address a range of seemingly necessary functions: bit recovery, encryption, message duplication, system crash recovery, delivery confirmation, etc. etc. This is a relief for the system designer; he can ship earlier.

The end-to-end principle drove a lot of design thinking for the Internet. For example DNS, the mapping of names to IP addresses, is layered above UDP, which is above IP. The end-to-end principle drove DNS up the stack like that.
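
As a small illustration of that layering, here is a hand-built DNS query dropped straight into a UDP datagram: UDP and IP just carry the bytes, and anything smarter (retries, caching, validation) is left to the endpoints. The resolver address is an arbitrary public one and the parsing stops at the header, so treat it as a sketch.

```python
import socket
import struct

def build_dns_query(hostname: str) -> bytes:
    """Build a minimal DNS query for an A record, by hand."""
    header = struct.pack(">HHHHHH",
                         0x1234,      # transaction ID
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # one question, no other records
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in hostname.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_dns_query("example.com"), ("8.8.8.8", 53))  # any recursive resolver
response, _ = sock.recvfrom(512)
answer_count = struct.unpack(">H", response[6:8])[0]
print(f"{len(response)} bytes back, {answer_count} answer record(s)")
```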

The designer in the thrall of the end-to-end principle strives to leave problems unsolved. That makes it a kind of lazy evaluation technique. Leaving problems for later increases the chances they will get solved by the end users rather than by the system designer. It pushes the locus of problem solving toward the periphery. It creates option spaces for third party search, innovation, etc.

It is possible to look on the design principle as shifting risk. By leaving the problem resolution to later the designer is relieved of the risk that he will screw it up. While users might prefer to have their problems solved by some central authority they do get a bundle of benefits if the problem is handed off to them. These benefits are the other side of the coin of agency risks.

The end-to-end principle is always about managing the risk associated with agency. The Internet’s designers were well aware that they were attempting to create a communication subsystem that would remain open, robust, and hard to capture. Those goals were complementary with designing a system that could survive in battle.

Whenever you clear the fog around one of these communication or distribution networks you find a power-law distribution. I.e. you find hubs. I.e. you find middlemen. I.e. you discover the risks of agency.

I don’t think that the original designers of the Internet expected to see the concentration of power we see in Internet traffic, domain name service, email, instant messaging, etc. etc. Nor do I suspect they expected to see the concentration of power that the Internet has triggered in the industries that are moving on top of it; i.e. a single auction hub, a handful of payment hubs, a single worldwide VOIP hub, a handful of book distributors, a handful of music distributors, one browser, one server, etc. etc.

Nothing in the end to end principle actually frustrates that outcome. It argues that there are a collection of reasons why a middleman, i.e. the designer of a distribution/communication cloud, might find it advantageous to limit what functions he performs in his role as intermediary. It doesn’t argue that intermediaries shouldn’t exist. The middlemen in the Onion piece are not so much being cut out as displaced by other middlemen. Middlemen rarely disappear completely.

This is why it is an ongoing effort to keep the network open. While we have a bag of tricks for shifting problems toward the edges and out of the center we seem largely at sea about how to control the degree to which hubs condense on the layers above us.

Mozilla Corp.

Cool, the Mozilla Foundation has budded off a commercial, taxable subsidiary. I agree with Karim. This is a very exciting development.

We have seen numerous attempts by commercial firms to capture some of that Open Source magic. Most of these have come from people whose motives are principally commercial. Now there is nothing wrong with those motives, but it tends to color their attempts. The motivations that serve the establishment and stewardship of a rich open commons tend to move progressively (sic) to the back burner.

It is difficult to create a hybrid in the space between these two very distinct ethical frameworks. It is not entirely clear if one even exists. What is clear though is that a lot of people from the commercial side are searching really hard to find one. I’m always happy to see search parties heading out from the nonprofit side of the space.

This is a particularly important one though.

My bemused characterization of the driving force for most open source start ups goes as follows: On the one hand we have free stuff! On the other hand we have rich CTO/CIOs! We will just stand in the middle and make money! It’s a plausible premise.

If you stick a firm into that gap there are a lot of other aspects to bridging between those two; it’s not just money. For example on the open side you have a high value placed on the creation of a huge pool of options, while on the commercial side you have a high value placed on minimizing risk and maximizing predictability. On the open side you have an enthusiasm for rapid release and adaptation. On the commercial side you’re required to sync up in tight lock step with the buying organization’s schedules. On the open side the evolution of the project is a continuous negotiation among the project’s participants; a deep relationship. Participants are often locked in. On the commercial side the relationships are kept at arm’s length with contracts and specifications. Buyers strive to commoditize markets with multiple vendors, avoiding lock-in. I could go on.

There is an argument to be made that the CTO/CIO side of these businesses should adapt. I have no doubt that over time they will. For example I suspect that CTOs will adapt before CIOs. But it is always hard to shift an installed base. It’s obviously hard when you dig into all the APIs of a complex piece of software, like Microsoft Windows. But it is even harder when you dig into the complex tissue of social webs. Changing the rules for how firms manage software isn’t easy. That’s why the CIO organizations will shift more slowly than the CTO organizations; they have a much more complex social web to adapt. At minimum a much larger one.

But back to the reason why the Mozilla move strikes me as important. It’s not just that I’m glad to see experimentation coming out of the open side of things.

Firefox is key. Installed base on the client side is key. To reach large swaths of market share the Mozilla community needs to solve a consumer marketing problem. That includes finding the ways and means to move the product down the existing distribution channels. Those channels are directly analogous to the gaps between the open source community and the needs of the CTO/CIO software users.

It’s my hope that the Mozilla Corp. can enable the community to leverage those channels.

Just to mix the two examples together: consider how hard it is for a CIO to justify installing Firefox rather than IE given how extensible it is. While for an open source guy that extensibility looks like opportunity, for the CIO it looks like increased risk and heightened support costs. An open source guy thinks Greasemonkey is cool. It makes the guys in the IT department quake in their boots. A variant of Firefox that addresses their concerns is a no-brainer. It gives the CIO access to the vibrant innovation around Firefox, but it allows him to limit the risks.

Exciting.

Phisher’s Trick

Most web login schemes work the same way. When the site becomes curious and wants to know more about the user it delegates that task to an identity provider. The identity provider then authenticates the user and sends him back to the original site. Information about the user can then be sent via a back channel of one kind or another. Lots of schemes exist for implementing these back channels: shared cookies, extra CGI parameters, full-fledged TCP back channels.
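
As a hedged illustration of the “extra CGI parameters” flavor of back channel, here is one way it is commonly done: the identity provider signs a small assertion with a secret it shares with the site, tacks it onto the redirect, and the site checks the signature and the timestamp on the way back in. The parameter names, secret, and URL here are all invented.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SHARED_SECRET = b"secret-shared-by-site-and-identity-provider"  # illustrative only

def idp_issue_assertion(user_id: str) -> dict:
    """Identity provider: sign 'who logged in and when' for the back channel."""
    payload = f"{user_id}|{int(time.time())}"
    sig = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"assertion": payload, "sig": sig}

def site_verify_assertion(params: dict, max_age: int = 300) -> bool:
    """Original site: check the signature and reject stale assertions."""
    expected = hmac.new(SHARED_SECRET, params["assertion"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, params["sig"]):
        return False
    issued_at = int(params["assertion"].split("|")[1])
    return time.time() - issued_at < max_age

params = idp_issue_assertion("user-123")
redirect_url = "https://shop.example/login/return?" + urlencode(params)
print(redirect_url)
print(site_verify_assertion(params))  # True
```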

Building a good identity service is delicate work. People screw it up because it’s hard. Criminals spend a lot of time working out schemes to trick the identity server into misbehaving.

Here’s a way to get it wrong: let untrusted parties bounce off your login server. In this example eBay’s the one that messed up. The problem? Phishers entice users to go to their eBay account and log in. Once logged in, the user is bounced over to the phisher’s web site. The user at that point thinks he’s talking to eBay; but he’s not. He’s talking to a bad guy.

The user thinks, when he logs into eBay, that he’s entering the world of eBay. This is similar to what in Liberty is called the circle of trust. As a rule of thumb identity servers shouldn’t provide services to unknown sites. Users presume that sites reached directly via the login server have some degree of trust, no matter how slight, among them.

It’s possible that an extremely lightweight identity system, like OpenID, could break this rule. The cost of conforming to the rule is twofold. It makes it harder to run an ID service, since you have to provision and maintain an account relationship with every site you serve. More critical from a business model point of view: you raise the barrier to entry for additional sites adopting your service.

Personally I think ID services should always have an account relationship with the sites they serve. They can then hang a lot of useful junk off that account data; for example a list of URL patterns they are willing to bounce back to.
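
A sketch of that idea: the identity service keeps, per registered site, the URL patterns it is willing to bounce users back to and refuses everything else, so it can’t be used as an open redirector. The site id and patterns below are invented.

```python
import fnmatch
from urllib.parse import urlparse

# Hypothetical per-site account data held by the identity service.
REGISTERED_SITES = {
    "marketplace-123": {
        "allowed_return_patterns": [
            "https://www.example-market.com/after-login*",
            "https://signin.example-market.com/*",
        ],
    },
}

def is_safe_return_url(site_id: str, return_url: str) -> bool:
    """Only bounce the user back to URLs the site registered in advance."""
    site = REGISTERED_SITES.get(site_id)
    if site is None:
        return False  # unknown site: refuse to act as an open redirector
    if urlparse(return_url).scheme != "https":
        return False
    return any(fnmatch.fnmatch(return_url, pattern)
               for pattern in site["allowed_return_patterns"])

print(is_safe_return_url("marketplace-123",
                         "https://www.example-market.com/after-login?cart=1"))  # True
print(is_safe_return_url("marketplace-123",
                         "https://evil.example.net/phish"))                     # False
```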

Every time you see data stuffed into a URL you obviously need to worry about what that might be revealing about the user which ought to be kept private. What’s less obvious is that every time you see a URL getting bucket-brigaded along you need to puzzle out how the user’s trust model is changing as he travels. That URL can tap that trust. The first worry is about stealing bits of the user’s privacy. The second is about stealing bits of trust off the intermediaries.