Category Archives: business modeling

It’s not a Depression it’s a Disgust!

In this study[1] the authors manipulated the emotions of their test subjects and then simulated a marketplace.  It is not surprising that your mood affects prices, both what you're willing to pay and what you're willing to accept.  They tested two emotions: sadness and disgust.  Apparently a market should clear faster if everybody is sad.  The sad subjects lower their selling prices while raising the prices they are willing to pay.  Disgusted subjects lower both.

A couple comments.

A severe economic recession is called a depression, but apparently it should be called a disgust.

I’m reminded that one of the theories of usury is that the purpose of interest on borrowed money is to compensate the capitalist for the pleasures he is forgoing when he hands over the money, and in turn I am amused by the idea that the usual macroeconomic prescription for a recession is to lower interest rates.  Presumably the intent is to make him sad.

I’ve been wondering if and when we will see the application of behavioral economics to macroeconomic problems.  Given the current recession, maybe we should prescribe a large dose of sad?

Mostly you observe behavioral economics research getting applied to sales and marketing.  No doubt evil legions are currently at work trying to puzzle out how to make shoppers sadder at the point of sale.

Hm, this would seem to explain why shopping malls make me so cranky.

[1] Heart Strings and Purse Strings: Carryover Effects of Emotions on Economic Decisions, by Jennifer S. Lerner, Deborah A. Small, and George Loewenstein, Carnegie Mellon University (pdf)

Market concentration in Web 2.0

A friend recently inquired:

… it says “Transferring data from www.google-analytics.com”. It has been sitting in that state now for minutes.

to which my immediate reaction was: “Oh, that’s Web 2.0.”

Web 2.0 is many things to many people. One of my favorites is that Web 2.0 is a vision for how the architecture of the internet operating system might shake out. In this vision there are many vendors who contribute services to the system, and applications are built by picking among those services. I joke that in that world the only dominant player would be O’Reilly, who’d naturally get to publish a book for every service. Doc writers rule!

A somewhat less general version of that strawman architecture, applications delivered by aggregating diverse vendor services, looks only at the individual web page: the page is assembled by pulling content from diverse vendor services. In that variation the UI subsystem for the internet operating system is situated in the web browser (much as back in the day we thought it might be situated in an X terminal). UI designers know that low latency is a must-have feature.

There is a gotcha in the Web 2.0 architecture. When you assemble your application, each additional supplier increases your risk. That’s called supplier risk. This is what my friend was observing. It used to be conventional wisdom that no sane web site developer would let this kind of supplier risk into his design. That has turned out to be false, and I think it was always overstated to the point of being silly.

Internet systems built along the lines of my Web 2.0 sketch are like just-in-time manufacturing, but with the knob turned up to eleven. Supply chains sometimes fail catastrophically in a cascading failure. There is a wonderful example of that in the book about the auto industry, The Machine That Changed the World. The story takes place in Detroit in the early 20th century. Before the story begins the auto industry’s supply chains are dense and somewhat equitable. Detroit has many small producers of assorted component parts. The producer of seats would come into work each morning to find his inputs sitting on his loading dock. He’d assemble his seats and deliver them on to the next guy. And then there was a recession. He comes in and his morning bucket of bolts is missing. His supplier has gone bankrupt. This failure cascaded, and when it was over, when the recession ended, the auto industry was a lot less diverse.

There are days when I think it’s all about latency. And in this world each hiccup drives us toward another round of consolidation. For example, I think it’s safe to say the chances you suffer the hiccup my friend observed are much reduced if you situate your site inside of Google’s data centers.

Well, so, thinking about my friend’s comment got me to wondering: How’s that Web 2.0 thing working out? Do we have any data on the depth and breadth of supply chain entanglement in the web application industry? Do we have any metrics? Can we see any trends? Ben Laurie has recently been looking at something similar (about DNS, about AS); the supplier risk he’s thinking about is what bad actors might do if they owned (pwn’d, in Ben’s terms) one of the points of concentrated control. He’s got pretty pictures, but no metrics.

Here’s a possibility. I’ve been enjoying a Firefox plugin, Ghostery, which reveals how many “web bugs” or “behavioral marketing trackers” or whatever you want to call them are embedded in each page I visit. For example, if you go to Paul Kedrosky’s awesome blog Infectious Greed there are ten (Google Analytics, Google Adsense, Lijit, Minit, Federated Media, Doubleclick, ShareThis, Sphere, and Insight Express). Ghostery isn’t quite doing what I wanted. It is surveying only a subset of the universe of Web 2.0 services used in assembling a page. So it doesn’t report when the page is pulling in Yahoo maps or widgets from Flickr or Etsy. But it’s a start.

If you opt in, Ghostery will pipe what it learns from your browsing back into a survey of what’s happening across various pages. That includes, of course, a directory of all the services it’s keeping an eye on. For example, here is the Ghostery directory page for Lijit, which reveals a bit of what’s being accumulated, i.e. that Lijit was found on over a thousand sites by Ghostery users who have opted in to reporting back what they are seeing.

So yesterday I hacked up a tiny bit of code to pull those counts from Ghostery’s directory so I could see what the tracker market looks like.  (Note that the Ghostery Firefox plugin is open source, but as yet the server’s not.)  You can see the rankings of top trackers here. I presume they are powerlaw distributed; organically grown unregulated market shares usually are (a quick sketch of how you might check follows the list below). Even so, it is extremely concentrated, with four of the top six positions being Google’s. Here’s the top handful:

800000 Google Analytics
300000 Google Adsense
200000 Doubleclick
70000 Statcounter
60000 AddThis
40000 Google Custom Search Engine
40000 Quantcast
30000 OpenAds
20000 Omniture
20000 WordPress Stats
20000 SiteMeter
10000 Revenue Science
10000 AdBrite
10000 Casale Media
10000 Twitter Badge
10000 MyBlogLog
10000 DiggThis
10000 Microsoft Atlas
10000 ShareThis
9000 NetRatings SiteCensus
9000 Google Widgets
9000 ValueClick Mediaplex
8000 AddtoAny
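
As a sanity check on that powerlaw presumption, here’s a minimal sketch of the check I have in mind: a powerlaw rank/count relationship is roughly a straight line on log-log axes, so fit one and eyeball the slope. The counts are the rounded figures from the list above; the fitting is plain least squares in numpy and has nothing to do with Ghostery itself.

```python
import numpy as np

# Rounded per-tracker counts from the list above (sites each tracker was seen on).
counts = [800000, 300000, 200000, 70000, 60000, 40000, 40000, 30000,
          20000, 20000, 20000, 10000, 10000, 10000, 10000, 10000,
          10000, 10000, 10000, 9000, 9000, 9000, 8000]
ranks = np.arange(1, len(counts) + 1)

# A powerlaw count ~ C * rank**(-a) is linear in log-log space,
# so fit log(count) against log(rank) and read the exponent off the slope.
slope, intercept = np.polyfit(np.log(ranks), np.log(counts), 1)
print("estimated powerlaw exponent: %.2f" % -slope)
```

With this few points, and counts rounded this coarsely, it’s only an eyeball test; a roughly straight log-log plot is about all I’d claim from it.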

Wave – Part 1?

Wave is neat and I currently think it will be very widely adopted.  This note is a quick summary of what it appears to be.  This is very impressionistic.  The specifications are amazingly rough!  I’m not sure I’d call this a platform, but I’m sure other people will.  It certainly creates a large option space for building things.  Wave certainly meets one of the requirements of a great platform; it opens up so many options for things to do that if you ask “What is it for?” the right answer is: “I don’t know.”  Or as a friend of mine once observed: a good platform is about nothing.  The demos at Google IO obscure that.  That always happens.  For example, when they first demo’d the Macintosh they had built one or two spectacular applications, MacPaint for example.  People would look at those demos and think: “Oh, so this is a machine for doing drawings.”

Wave provides the tools to do realtime distributed coordination of a complex activity.  That activity might be a game of checkers, developing a product plan, a conversation, or the distribution of a todo list.  So Wave provides tools to solve the coordination problems that arise when you have a data structure distributed around and multiple parties all modifying it.  Wave adopts the same technique we use for source control, optimistic concurrency.  Everybody edits their local copy.  These edits may turn out to conflict with each other.  The resulting conflicts are resolved by some mechanism, which in the Wave terminology is given the math-like name operational transforms.  In source control systems I’ve always called that conflict resolution.

A Wave document is said to consist of a set of wavelets, which in turn contain one or more XML documents.  For example, a Wave document representing a game might have wavelets for all the players, spectators, officials, the score board, game state, the moves, advertising, discussion threads, individual comments, etc.  Nothing in the Wave specification blocks out how all those wavelets manage to relate to each other.  Different activities will, of course, have different kinds of constituent parts.  Nothing I’ve read yet specifies even the building blocks for things like users, bits of HTML text, etc.

But the spec does block out the primitives for editing the XML documents that constitute the atomic elements of the Wave.  Those operations are a small set of editing operations: move pointer, insert text, insert XML tag, split XML element.  It reminds you of using a text editor.

These are the operations which might give rise to conflict.  If Alice and Bob are working on the plan for a party and both change the budget for snacks, those edits might both be represented by a series of operations (move to char 120, delete 5 characters, insert “25.00”), with Alice entering “25.00” and Bob entering “45.00”.  The protocol has a scheme for resolving this conflict.  It does not average the two!  It just picks one, deterministically, and moves on.
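
To make that concrete, here is a toy sketch of the pick-one-deterministically resolution described above. This is not Wave’s actual operational transform machinery (the real protocol transforms concurrent operations rather than dropping them); the op tuples and the tie-breaking rule (lexically smallest author wins) are made up purely to illustrate the Alice/Bob snack budget example.

```python
# A toy edit is a list of ops: ("move", pos), ("delete", count), or ("insert", text).
def apply_edit(doc, ops):
    pos = 0
    for op, arg in ops:
        if op == "move":
            pos = arg
        elif op == "delete":
            doc = doc[:pos] + doc[pos + arg:]
        elif op == "insert":
            doc = doc[:pos] + arg + doc[pos:]
            pos += len(arg)
    return doc

def resolve(doc, edits):
    """edits: {author: ops}, all made against the same base version of doc.
    Deterministically keep one edit (here: lexically smallest author) and drop the rest."""
    winner = min(edits)
    return apply_edit(doc, edits[winner])

base = "Party plan. Snack budget: 10.00 dollars."
pos = base.index("10.00")
edits = {
    "alice": [("move", pos), ("delete", 5), ("insert", "25.00")],
    "bob":   [("move", pos), ("delete", 5), ("insert", "45.00")],
}
print(resolve(base, edits))  # Alice's 25.00 wins; Bob's conflicting edit is simply dropped.
```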

That’s about it.  But there are some entertaining bits piled on top that are fun, and necessary.  I’ll mention three of these: historical background, how this all gets federated, how access rights are managed.

Optimistic concurrency goes back at least into the 1970s; at least that’s the first time I saw it.  I think the first time I saw it used for a realtime application with human users was a drawing system out of PARC in the 1990s, and one of Google’s whitepapers on Wave mentions that.  These days there are two very nice applications that I’ve used to coordinate activities: Subethaedit and Etherpad.  I highly recommend Etherpad to anybody who’s working on an agenda or meeting notes jointly with other people – it’s fun.

While it is possible to imagine implementing Wave entirely as a peer-to-peer system with no central coordination (Subethaedit actually does that), Wave implementors are all going to have a server that labors on behalf of the users participating in the activity the Wave represents: storing the Wave document, orchestrating the ongoing edits, and naively resolving conflicts as they arise.  The plan is to allow a user’s wave server to collaborate with other such servers.  That works by having one server act as master for each wavelet.  It’s worth noting that every participant in a Wave document is not necessarily a participant in every wavelet of that document.  In our example game, two spectators can have a private chat within the game’s Wave document.  To be responsive, each server caches copies of the wavelets its users are participating in, and for reasons of thrift and privacy these are limited to just those.  The authoritative server is responsible for retaining the master copy of the wavelet and for resolving conflicts.  So every edit flows to this master and then back out to the other participating servers.  There is a bit of crypto complexity layered on top of that to assure that bad actors can’t masquerade as another participant.
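
Here is a minimal sketch of that hub-and-spoke flow for a single wavelet, under the assumptions in the paragraph above: one authoritative master that accepts edits, applies them in arrival order against the version they were made from, and fans the result back out to the caching servers. The class and method names are mine, not anything from the Wave specification, and a real server would transform stale edits rather than rejecting them.

```python
class WaveletMaster:
    """Toy authoritative server for one wavelet; names and behavior are illustrative only."""

    def __init__(self, initial_doc=""):
        self.doc = initial_doc
        self.version = 0
        self.subscribers = []          # the caching servers of participating users

    def subscribe(self, server):
        self.subscribers.append(server)
        server.receive(self.doc, self.version)

    def submit(self, edit_fn, base_version):
        # Only accept edits made against the current version; a real implementation
        # would transform a stale edit against the intervening ops instead.
        if base_version != self.version:
            return False
        self.doc = edit_fn(self.doc)
        self.version += 1
        for server in self.subscribers:
            server.receive(self.doc, self.version)
        return True


class CachingServer:
    """A participant's server caches just the wavelets its users take part in."""

    def receive(self, doc, version):
        self.doc, self.version = doc, version
```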

It is very unclear at this point how access rights to waves are managed.  Obviously wavelets will have participating users.  The participation will be asserted along with some rights; for example the right to view but not the right to modify.  In addition there will be groups of users, and these too will be asserted as participating in a wavelet.  If that’s the case, then determining if a user has the right to do something will involve searching the user and group assertions on the wavelet.  Remember, above, that a Wave document consists of just a set of wavelets.  Obviously for any given kind of Wave document there will be more structure than just a set.  For example, our spectators’ private conversation would consist of the conversation and then a thread of comments.  It is totally unclear how access rights are propagated or computed across those structures.
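
Since the specification doesn’t say, the following is only a guess at what the per-wavelet lookup might look like: walk the wavelet’s participation assertions, treating a group assertion as matching when the user belongs to the group. Every name and structure here is invented for illustration; none of it comes from the Wave spec.

```python
# An assertion is (principal, kind, rights), e.g. ("alice", "user", {"view", "modify"})
# or ("spectators", "group", {"view"}).  Group membership is a separate lookup.
def may(user, right, assertions, group_members):
    for principal, kind, rights in assertions:
        if right not in rights:
            continue
        if kind == "user" and principal == user:
            return True
        if kind == "group" and user in group_members.get(principal, set()):
            return True
    return False

chat_wavelet_acl = [
    ("alice", "user", {"view", "modify"}),
    ("spectators", "group", {"view"}),
]
groups = {"spectators": {"bob", "carol"}}

print(may("bob", "view", chat_wavelet_acl, groups))     # True
print(may("bob", "modify", chat_wavelet_acl, groups))   # False
```

How such a check would compose across the larger structures (the conversation, its thread, the individual comments) is exactly the open question.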

Everything is amazingly raw at this point.  Which signals virgin territory.  It’s unclear how good a landlord Google will be, but no doubt a lot of people are going to go forth and attempt to stake some claims on this landscape.

Off to the Races?

The term “platform” misleads people.  The metaphor is flawed.  It suggests land, and it can be made to work, if you insist.  Accepting the metaphor, applications are built on the platform, like houses on the landscape.  I read recently a brief summary of why, even if you set aside the housing bubble, the cost of housing has risen in the US.  Two reasons: zoning and tax caps.  Zoning has made it nigh impossible to increase the density of the existing metropolitan areas.  Tax caps doubled down on the primary problem of public goods: under-provisioning.  In the absence of public goods (schools, roads, security, environment, public health, …) individuals are forced to provide substitutes; and by definition these are higher cost and lower quality.  For the platform metaphor to work it’s critical to think not just about the applications it supports.  You need to dig into the governance; i.e. the costs, rules, and services provided.  That is an improvement and it does illuminate the question, but it is not my preference.

There are at least three aspects of that metaphor that I find lacking.  You need a metaphor that gives equal weight to both sides of the equation: the services a platform provides are just as important as the applications it enables.  You need a metaphor that gives more weight to the life-cycle of platforms; in each round we experience a race to see who will own the platform.  The platform-as-land metaphor is all too likely to lead us to ‘pay no attention to the man behind the curtain.’  You need a metaphor that embraces how important the network effects are.  All these can be seen through the lens of each other, particularly in the early days as a platform emerges.

Consider the current state of play.  Developers seek out fresh real estate to build on and these days they appear to be gathering in two regions: smart phones and cloud computers.  So there are two species of platforms, two competitive games in play, two industrial standards battles.  In the life cycle of these platforms both horse races are well out of the gate.  Apple and Amazon respectively have grabbed substantial early leads.

Picking the right metaphor helps to assure you stay focused on the right things and that you have the right expectations.  For example, some applications, payments for example, are probably destined to become key platform features.  That in turn informs the question of who’s in the game.  For example, is eBay/Paypal a cloud computing OS waiting to happen?  I think so.  It also helps to explain why Google and Amazon have payments offerings.

We know to expect an operating system to provide a file-system and a GUI.  We know to expect that a local government will provide public schools and some regulation of the sewage.  So, presumably we should be forming expectations about what features a cloud computer offers, or a smart phone.  Here’s a nice long list for cloud computing.  Here’s a shorter list for smart phones.  When the column fodder charts are that messy you can be sure of a lot of consolidation and turbulence ahead.

The early days in the life-cycle of a platform are interesting in part because where the lines are drawn is under discussion.  Things settle down during the midlife.  I can recall the heady early days of the Mac when every release of the OS brought with it new, extremely exciting APIs.  But also how each of these APIs was actually prototyped by somebody else, often an application builder.  There is always a tension between what will be owned by the platform and what will be owned by those around it.  This is a bit like how some wags like to complain that the town’s public produce markets or schools compete unfairly with private enterprise.  Right now, for example, there is a firm that is dominant in the geo-location-via-wifi market, and there are three clear ways that might go.  They might be rolled up into one of the platform players.  They might be displaced by an open substitute (based on, say, open street maps).  Or of course they might survive as a vendor.

There is one place where I seem to most often run into confusion caused by the platform-as-land metaphor, and that is with websites that are playing the open API card.  The metaphor causes them to focus primarily on getting developers to adopt their API.  That’s mostly a good thing, not least because it is actionable.  But it tends to make them blind to the dynamics of the battle unfolding all around them.  For example, for various reasons a service offered inside of EC2 is preferable to a service offered outside; at minimum it will be more performant and the bandwidth costs will be zero.  So I suspect we will see a trend toward all firms offering an open API moving some or all of their offering inside of EC2.  More generally, and presuming that real competitors to EC2 emerge, they will have to build the same kind of branch offices inside each EC2 competitor.  That in turn is exactly like the “Render unto Caesar the things which are Caesar’s” dynamic seen around older operating system platforms.  Where, for example, a hardware maker or application maker has to carefully assure his offerings are supported by the OS vendor.

Why Do We Pay Attention?

Why do we read those blogs, email, chats, twitter, voice mails, newspapers, magazines, etc., etc.?  Presumably there is some logic to that.  Some motivational schema.  There’s money in the answer to this question.  Will my students pay attention?  Will my novel be a hit?  Will my newspaper survive?  So, surely this question has been extensively studied?  I can think of a few examples.  There are handbooks on teaching, writing, and advertising that all look into the question.

Here is another attempt, coming at this from the currently popular puzzle of what might stop the free-fall of newsprint and its codependents (i.e. investigative reporting, PR, local advertising, etc.).  He blocks out four reasons why we expend resources to accumulate new information:

  • Entertainment – is everybody animated now?
  • Deciding – in a yellow wood?
  • Staying Expert – sort of a service contract model, I guess
  • Paid To – diagnosing, trained,  flattery?

These are not independent.  For example, the author of a highly technical paper targeted at a community of experts will often include a significant amount of entertaining content, since he knows that makes the material more memorable or more viral.  But one reason it’s clear these are disjoint categories is how, when your goal is drawn from one category, it can be irritating to have content from one of the others popping up.

I found it disconcerting and then amusing that what I’ve labeled “paid to” he named flattery.

I’m not particularly comfortable with this framework.  Why do fans pay attention?  But it is fun to compare it to various other schemes: story templates, selling scripts, etc.  For example, in the typical fairy tale our hero is cast out of his home, goes on a quest, and then returns home.  That has all four elements.  For example, being influenced by social proof happens in situations that have elements of deciding, but it helps to highlight how there is a social aspect to all four.  When we tell a story by opening with a mystery to hook our readers, a standard bit of teaching advice which I used in this posting, then we are pulling on a few chords from all four.  And where does the phrase “breaking news” fit into that framework?

And what’s up with cliff hangers?  People do pay to have those resolved.  Did Ben find a job yet?  Tune in tomorrow!

based on The 4 reasons anybody ever consumes information…

Groups and Value

Thinking here about group forming and group forming networks; here’s one of those typical B-school 2D drawings:

Presuming we have solved both the problem of aggregating the group and of extracting the value, then points on that surface are more valuable in proportion to size-of-group * value-of-member.  This is the calculation that any site with an audience makes, or any shop with regular customers, or any club.  The definition of value varies a lot.  A knitting group wants something different from its members than does a standards body.  The word authentic gets tossed around to label the mismatch between what affiliate marketing platforms (like Amazon, or Google AdSense) value in site visitors vs. what makes a site attract an authentic membership with some particular enthusiasm.  A lack of appreciation for how diverse value is goes a long way toward explaining how dismissive people are of sites with narrow enthusiasms.  People dismissed open source for years because they were blind to the values that attracted its participants; people are no less blind today, even if they are less dismissive.  It really pulls my cord to watch observers rapidly dismiss sites of other enthusiasms just because they can’t be bothered to puzzle out the value those members (or the site operator) have managed to find in there.

It seems useful to be clear that value-of-member has at least four aspects.  There is the value the members see in each other (a p2p network scoped by the group).  The value the members contribute to the common cause of the group (a Sarnoff kind of value to the group’s barn raising).  The value the site owner (or steward) sees in his members (i.e. a site for lawyers wants the lawyers who are highly respected and well networked to participate).  And the value that feeds directly into value extraction (i.e. the lawyer site values those who click thru on the ads or regularly subscribe to premium services).  Value is messy.

Presumably the universe of groups, the population, is distributed on that chart such that most groups are down near the origin.  Again it frustrates me how people are dismissive of those.  [Apparently I’m easily peeved :)]  So they complain about how GitHub’s fuzz of forked projects is confusing, or how Google Code and SourceForge are cluttered with tiny projects, or how the numbers Yahoo Groups or Ning report are inflated by groups with little or no traffic.

Which brings us to the question of aggregating groups.  We can visualize this as regions on that chart.  Consider the local Dayton Business Journal: it’s got an audience that is valuable in a particular way, and it gets rolled up into a company like American City Business Journals (see the select-city pulldown on one of their sites).  Or consider this set of local newspapers around Boston.  In both cases the set of groups aggregated spans some range of distances from the origin, and the definition of the value-of-member axis has been narrowed down.  That narrowing is in part tied to the cost of rolling up the aggregate, which presumably involved negotiation and money.  SourceForge is a different story.  They rolled up their groups organically, which goes to explain why they have a lot of groups close to the origin.  SourceForge’s value-of-member definition isn’t very broad spectrum.  But there are platforms where you see extremely broad spectrum value-of-member definitions; Yahoo Groups, Meetup, Ning, vBulletin, WikiSpaces, and Acquia are all examples.  I’ve often thought that Yahoo’s strategy was to roll up these kinds of companies; and it’s a puzzle why that aggregate hasn’t turned out to be more valuable.

Well, it’s all food for thought.

Etsy Sellers

I’ve been playing with the recently released API for Etsy, and here is a chart of questionable value.

Each dot represents a single shop on Etsy.  The vertical axis shows how many items they have sold.  The horizontal axis is a very rough estimate of their average price.

This is pretty bogus.  Some reasons why:  I drew my sample from a list of high volume shops; if I had a list of high grossing shops I’d have gotten more points in the lower right.  I estimated the average price by sampling current offerings; e.g. the shop at the lower right probably has an average selling price of around twelve dollars, in spite of having a number of two thousand dollar items in their shop.  And the data is not per-year, but rather since the shop opened.
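
For what it’s worth, the per-shop estimate was about this simple. The sketch below averages prices over a sample of a shop’s current listings; fetch_shop_listings is a hypothetical stand-in for whatever Etsy API call returns a shop’s active listings (it is not a real endpoint), and sampling current offerings is exactly the source of the bias just described.

```python
import random

def estimate_average_price(shop_id, fetch_shop_listings, sample_size=50):
    """Rough average-price estimate for one shop from a sample of its current listings.

    fetch_shop_listings(shop_id) is assumed to return a list of {"price": float, ...}
    dicts; it stands in for the real Etsy API call, which isn't shown here."""
    listings = fetch_shop_listings(shop_id)
    if not listings:
        return None
    sample = random.sample(listings, min(sample_size, len(listings)))
    return sum(item["price"] for item in sample) / len(sample)
```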

Lots of things come to mind.  For example, the cost structure of a shop in the upper left (mostly fulfillment, I assume) is quite different from those in the lower right.

That said, it doesn’t seem implausible that there are a few hundred shops that gross more than fifty thousand a year at Etsy.  Maybe there are a few whose profits are above that level.

Demand the Surprises

While doing a bit of work helping The Echo Nest get their developer network rolling, I got to observe an amazing, outrageously cool example of what can happen when you open up your technology.

This bends one of my blogging rules: no blogging about the job.  This time it’s a consulting client.  But the gig is all done and I’m not revealing anything proprietary.

I treasure examples of why relinquishing control of your technology is a good move.  Because bewilderment is often the first reaction when I suggest it.  And then, most technology owners don’t seem to like the explanation, which seems straightforward to me.

My favorite answer for why this can work: searching for cool applications demands skills and attitudes that the firm lacks.  These are on the demand side.  They are close to the problem the user needs to solve.  I love this answer because it’s symmetric – scarcity on both sides.  The firm should not hoard its options, because the knowledge to act on those options is scarce.

You can frame this answer as a search problem.  Searching the option space created by the new technology requires all the usual stuff: capital, talent, knowledge, an appetite for risk, and intimacy with a high value problem.  Delegating the search problem to third parties works well because they bring increased knowledge: they understand the problem being solved, while the firm only understands the technology being applied.  The developers in your developer network bring a heightened appetite to solve the problem, because it’s their problem.  It is perversely fun to note that the 3rd party will take risks the firm would never take; they might be small, foolish, impulsive, or very large and self-insured.

This isn’t the only workable model for a developer network (there are, just to mention three: commoditizing, standardizing, and lead generating models).

But if this is the model you’re using you can begin to set expectations.  A successful developer network must create surprises.  If the search created by the developer network does not turn up some surprising applications of your technology, it’s probably not working yet.

When it works, the open invitation to use your technology creates a stream of surprises.  Expect to be bewildered.  Curiously, the somewhat bewildering decision to relinquish control, if successful, leads to yet more bewilderment.  But surprise comes in many flavors.  You may be envious because the third party discovers some extremely profitable application of your tech, as Microsoft was when the spreadsheet and word processor emerged in their developer network.  You may be offended, as some of us in Apache were when violent or pornographic web sites emerged in the user base.  You may be disappointed, as I was when the market research showed that most spreadsheets had no calculations in them.  You are often delighted, as I suspect the iPhone folks were when somebody invented a wind instrument based on blowing on the phone’s microphone.

Dealing with the innovations created in the developer network can be quite distracting.  It’s in their nature, since the best of them take place outside the core skills of the firm.  That means that comprehending what they imply is hard.  Because of that I seem to have developed a reflex that treasures these WTF moments.

So, one aspect of managing a developer network is digesting the surprises.  To oversimplify, there are two things the developers bring to your network: a willingness to take risks, and domain expertise.  The first means that you often think: golly, that seems rash, foolhardy, and irresponsible.

Consider an example.  It is very common to observe a developer building a truly horrible contraption.  They use bad tools, in stupid, even dangerous, ways.  And just as you’re thinking “oh dear” they get a big grin on their contented face.  If that happened inside an engineering team you’d likely take the guy aside to discuss the importance of craftsmanship.  Or, if you’re a bit wiser, you might move him into sales engineering.  That kind of behavior is not bewildering; it’s a sign of somebody solving a problem, creating value.  Value today, not tomorrow.  It’s a sign of an intense need.  Now, intense need is not enough to signal a high value product opportunity; for that you also want the need to be widespread.  Once you get over yourself, and learn to appreciate the foolhardy, you can start to see that it is actually a good sign.

But developers don’t just bring a willingness to take risks.  They can also bring scarce knowledge that you don’t have.  I love these because it’s like meeting somebody at a party who’s an expert in some esoteric art you know nothing about.  It’s a trip to a foreign country.  It’s the best kind of customer contact – they aren’t telling you about their problems, they are revealing intimate information about how to deal with those problems.  Like travel to a foreign land it is, again, bewildering.

We caught one of these last fall at The Echo Nest.  I love it because it is so entirely off in left field.

The folks at The Echo Nest have pulled together a bundle of technology that knows a lot about the world of music.  They have given open access to a portion of that technology in the form of a set of web APIs.  So they have a developer network.  They have breadth of music knowledge because their tools read everything on the web that people are saying about the world of music.  They have in-depth understanding by virtue of software that listens to music and extracts rich descriptive features about individual pieces.  It is all cool.

The surprise?

Last fall Philip Maymin (a Libertarian, a theoretical finance guy, a hedge fund manager, and one of the developers in The Echo Nest developer network) figured out how to use the APIs to guide stock market investments.  How bewildering is that!  He started with a time series – hit songs – and ran the music analysis software on that series.  He then gleaned out correlations between the features of those songs and market behavior.  He reports that work in this paper: Music and the Market: Song and Stock Volatility.

It is a perfect example of how the developers in your network bring unique talents to the party.  I doubt that anybody at The Echo Nest would have thought of it.

I often get asked where the money is in giving away your technology in some semi-open system.  The question presumes that hoarding the options the technology creates is the safest way to milk the value out of them.  If you start from that presumption it’s a long march to see that other approaches might generate value.  What I love about this example is how it is a delightful counterpoint to the greed implicit in that hoarding instinct.  What’s a more pure value generator than a market trading scheme?

Of course now I’m curious: anybody got any examples of trading schemes based on the iPhone, Facebook, or Roomba platforms?

Sense of Scale

One thing I’m finding particularly hard is getting a sense of scale for what’s unfolding in the markets, but here’s a run at the question.

Katrina was maybe an $80 Billion event, and it looks to me like Ike is probably a $30 Billion one.  The Iraq war’s direct costs are probably $2 Billion a week.  The 9/11 attacks destroyed about $16 Billion in physical assets, and the cleanup cost about $11 Billion; lots more in the ripple effects.

Microsoft’s, Apple’s, and Google’s market caps are roughly $230, $119, and $140 Billion.

AIG’s market cap a year ago was around $200 Billion; today it’s around $10 Billion.  A year ago Fannie Mae and Lehman Brothers had market caps of around $60 and $40 Billion each.  That’s $300 Billion total, about twice the size of the numbers in the 2nd paragraph.

These are big storms. With about 100 million households in the US, $300 Billion is three thousand dollars each.

These estimates are all awfully rough.  I couldn’t quickly find estimates for the wealth destruction in the housing markets, but here are a lot of write downs.  Nate Owan takes a run at a similar question.