Cloud Pricing Trends – part 3

Antonio notices the curious disconnect between Moore’s law and cloud pricing.  I don’t know that I have anything more to say today than I said in my previous two postings (1, 2).  What’s clear is that these vendors do not feel as much pricing pressure as a naive analysis would suggest.  Apparently the switching costs are too high and the incentives too weak.  My original presumption was that these new operating systems would provide access to unique resources; e.g. Amazon’s distribution and shoppers, Google’s web caches, knowledge bases, and searchers; along with whatever sticky unique APIs they could throw up.  Currently only Facebook is doing much with their unique assets.

I guess the question remains.  Is EC2’s pricing telling us something deep about their pricing power, or are they caught in a classic boiling-the-frog situation, where the signals the market is giving them are too weak to trigger a reaction?  Hirschman tells a story about how the US automakers failed to hear customer quality complaints because the complainers just rotated around between them; presumably hearing such signals would be even harder in a market growing as fast as the cloud markets are.

What I didn’t comprehend until recently is how much these systems are about impulse.  Not just that you can build something impulsively, but also that you gain some assurance that you will be able to reap the value to be had when the unpredictable impulse of usage hits.  The larger the proportion of activity that falls into either of those categories, the more these systems dominate.  I’d guess that proportion is over 80% of all activity.

I wonder if switching (encumbered as it is with huge costs of many kinds for multi-homing) is actually deeply at odds with the impulsive.  Certainly when you build something rapidly, on impulse, you don’t front-load a lot of prep work to enable later switching.  And certainly when you’re between load spikes there is only a weak incentive to reduce the cost that will be incurred when the next spike happens, if it ever does.

I’ve been musing recently that there is, presumably, a class of business models that work because they catch passing load spikes.  Businesses designed to have very low run rates until the earthquake, blizzard, hurricane, power grid failure, commodity price spike, fad, or conference passes through.  Such businesses are naturally complementary to the cloud computers.  It would be fun to be inside Amazon, where it might be possible to collect a large set of examples.

But high-minded theories aside, I think it’s mostly that switching costs are much higher than people want to admit.  I can’t even get around to updating all the damn blog software.

1 thought on “Cloud Pricing Trends – part 3”

  1. Edward Vielmetti

    Good point about impulsive purchases and the cost of dealing with load spikes.

    If you look at the typical engineering problem, the difficulty is dealing with the load at the 99th percentile, whether that be load on a web site or the design of a highway or a parking structure. If the system that you have bought into has an easy way of dealing with the one day a year you randomly need 10x or 100x your baseline load, then the true switching cost involves not just migrating to the new site but also testing its ability to deal with peaks, and you might not have any way to test that.

    (I am sure there are plenty of cities that would be happy to have parking structures that never filled up in exchange for paying 2x or 3x more per hour for car parking.)
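Edward’s 99th-percentile point is easy to make concrete.  Here is a rough back-of-the-envelope sketch in Python, using made-up numbers rather than any real trace, of how far the provisioning target drifts above the average once a few spike days land in a year of otherwise quiet traffic:

```python
# Rough sketch with hypothetical numbers (not from any real trace): a year of
# mostly quiet traffic plus a handful of spike days pushes the 99th-percentile
# and worst-day provisioning targets far above the average load.
import random
import statistics

random.seed(1)

# Hypothetical daily request counts: ~360 ordinary days plus a few spike days.
baseline_days = [random.gauss(1_000, 100) for _ in range(360)]
spike_days = [random.uniform(10_000, 100_000) for _ in range(5)]  # the rare 10x-100x days
daily_load = baseline_days + spike_days

mean_load = statistics.mean(daily_load)
p99_load = statistics.quantiles(daily_load, n=100)[98]  # 99th percentile
peak_load = max(daily_load)

print(f"mean daily load:  {mean_load:,.0f}")
print(f"99th percentile:  {p99_load:,.0f}  ({p99_load / mean_load:.0f}x the mean)")
print(f"worst single day: {peak_load:,.0f}  ({peak_load / mean_load:.0f}x the mean)")
```

The gap between the mean and the tail is the capacity you pay to keep on hand, or rent from the cloud only when the spike arrives; and, as Edward notes, it is exactly the part of the workload you cannot easily rehearse on a new provider before switching.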
