Monthly Archives: August 2008

Chart Room

I skimmed the book Control through Communication.  The book is about firm size.  As we learned to manage information and communication at larger and larger scales, the size of firms grew.  Which is all fine, but the aspect of the book I enjoyed was the schemes and gadgets.  Mostly the gadgets.  At one point the boss managed his communication with a grid of pigeonholes, one for each day.  This scheme evolved, getting larger and larger, until you could buy single unfolding desks, the size of a modern SUV, with hundreds of pigeonholes.

This picture shows the Dupont executive chart room.  Using charts to illustrate the status of the firm’s many operations was, at some point in time, an innovation.  About the same time Dupont started to use crude modern financial modeling, ROI and such.  Somebody at Dupont was sufficiently enthusiastic about charts to have this marvelous contraption built.

The executives would sit in those chairs shown in the foreground, and as each portion of the operations came up for review their assistants would slide forth the handful of charts illustrating what was happening in that area.  It reminds me of a dry cleaner’s.

One theme in the book is how the mere idea that information should flow up and down the corporate hierarchy was innovative.  And then that these communications would become formalized was yet another.  I don’t know if the word innovation, rather than say inevitable, is the right term.  The book disappointed me, but then I only skimmed it, because it doesn’t really engage with the two questions I’m interested in.

First, how did firms manage the puzzle of negotiating out what should be communicated and how hierarchical that communication ought to be?  It was interesting to realize that the practice of highly hierarchical communication may well have emerged almost entirely because that was what the technology could support.  The complexity of printing and its scaling characteristics meant that the central office could execute communication acts that the periphery could not.  If so, the xerox machine must have created a bloom of lateral communication.

Second, how much did this wipe out smaller businesses?  Was this really a major driver of the condensation where single large firms displaced smaller firms?  The rollup that happened in the telephone industry was reprised in multiple other industries; but it’s a provocative thought that it happened for almost identical reasons: communication-based network effects.

These two questions interact in some cases.  When railroads merged, the operational rules of the dominant player would have to displace those of the weaker one, and in many cases the stronger one had more effective communication and control schemes.

Looking at that picture I’m reminded of a reporting framework that a new senior manager once deployed into a firm I was working at.  The framework was extremely standardized: N slides, M panels in each slide, with rules about what was expected in each slot.  It wasn’t a bad scheme, and it certainly made his job much more efficient.  Which was good, because it allowed him to cut thru a lot of middle management and see deeper into the organization.  And yet it tended to eliminate from discussion anything that didn’t fit.  If the slide lacked a slot, say for employee morale, then the issue was invisible.  To exaggerate, it was as if he was looking at the company thru a straw.  I’d say it created a puzzle for those who wanted to signal something upstream.  You had to get it into the straw’s narrow view.  I suspect his scheme was the direct descendant of that chart room.

Blow Up Rich

I’ve been trying to think about the financial structures around processes that exhibit highly skewed distributions.  The insurance industry is a great place to find examples.  We buy insurance to hedge against the small but awful.  Most of our houses don’t burn down, but it does happen.  The chance of a fire is scale free; the insurance company protects its clients at the scale they care about, but who protects the insurance company against the rare event that burns down the entire town?  There are three ways the insurance industry handles that scenario: they don’t cover it (excluding acts of god, for example), they reinsure into a yet larger pool, or they avoid it by not insuring in certain venues. Over here at Bronte Capital is a posting arguing that Warren Buffett, who moved into the insurance industry in a big way over the last few years, has been working this third angle.

When a process with a highly skewed distribution delivers its rare but powerful shock into the system, its black swans, everything designed to work with the median shocks blows up.  I’d be interested to know how the insurance industry handled the New England hurricane of 1938.  I’d be interested to know how the insurance industry in Thailand handled the AIDS epidemic.
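
The blow-up dynamic is easy to see in a toy simulation.  The sketch below (all parameters are made up for illustration) draws heavy-tailed Pareto losses for an insurance pool and sizes reserves against the median year; the point is simply that median-based reserves look comfortable right up until a fat-tail year arrives.

```python
import random

random.seed(42)

def pareto_loss(alpha=1.5, scale=1.0):
    # Heavy-tailed loss: most draws are small, rare draws are enormous.
    return scale / (random.random() ** (1.0 / alpha))

# Fifty simulated years, each the sum of a thousand policy losses.
losses = [sum(pareto_loss() for _ in range(1000)) for _ in range(50)]

typical = sorted(losses)[len(losses) // 2]   # median annual loss
reserves = typical * 1.5                     # a comfortable-looking cushion

blowups = sum(1 for yearly in losses if yearly > reserves)
print(f"median annual loss: {typical:.0f}, reserves: {reserves:.0f}")
print(f"years that blow through reserves: {blowups} of {len(losses)}")
```

With a thin-tailed (say, normal) loss distribution that cushion would almost never be breached; with a fat tail it is breached surprisingly often, which is the whole argument for reinsurance or for simply not writing the risk.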

Another place I’ve been musing about exceptional, but inevitable, events is where you situate your career planning.  I’ve a friend who likes to say that almost all the people he knows who made a fortune in their life “fell off a log into a pile of money” thru no special merit of their own, except that in some cases they consciously picked a good log to sit on.  On the other hand, a lot of people just fall off a log sooner or later.  It would be nice if, as you plan your career, you had a better sense of what the chances are in the trade you pick, in the economy at large.  The fetish people have for presuming that career path probabilities are entirely a matter of personal merit seems reckless.  I was quite impressed when an acquaintance of mine with a degree in biology explained he was moving into lawyering because, well he didn’t put it this way, the climate was more predictable.

Recently I’ve been trying to explain how wily US cell phone pricing is.  They sell monthly plans with N minutes, and then when the exceptional crisis comes down the pike, you fall in love for example, they charge you huge overage fees.  The typical plan delivers minutes at about five cents each, while overage minutes run about forty cents each.  Better, at least for them, is that as little crises come and go you start changing your plan to buy more minutes, which in the absence of a crisis you don’t use.  That in turn raises the real cost of even your noncrisis minutes.  It’s a very impressive pricing scam, isn’t it!  I recommend prepaid (t-mobile for gsm, pageplus for cdma on verizon).

If we ignore prepaid cell phone service, the cell phone contracts with a bundle of minutes every month are a bit like lousy insurance policies. You buy the option to use five hundred minutes, not because you need them, but because you’re insuring against the risk that you’ll run over and get stuck with the overage charges.  That’s great, and I mean that sarcastically: they are selling you insurance against a risk they created.
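
The arithmetic of that scam is worth making explicit.  A quick sketch, using the rough five-cent/forty-cent figures from above (the $25-for-500-minutes plan is an illustrative assumption, not a real tariff):

```python
# Effective per-minute cost of a bundled plan: plan minutes at ~5 cents,
# overage minutes at ~40 cents, per the post's rough figures.
def effective_rate(plan_minutes, plan_price, used, overage_rate=0.40):
    overage = max(0, used - plan_minutes) * overage_rate
    return (plan_price + overage) / used

# 500 minutes for a hypothetical $25 is 5 cents each -- if you use them all.
print(effective_rate(500, 25.00, 500))
# Use only half of them and your real rate doubles.
print(effective_rate(500, 25.00, 250))
# One bad month and the average rate triples, since the marginal
# minutes cost eight times the headline rate.
print(effective_rate(500, 25.00, 700))
```

Either way you lose: under-use and the unused minutes inflate your real rate, over-use and the overage does.  The plan price is only the true price at exactly one usage level.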

It amuses me to wonder what would happen if everybody in the country could be coordinated into using all those free minutes in one month.  I very much doubt the phone companies can fulfill that promise.

The options contracts implicit in those monthly cell phone contracts are analogous to the insurance pools.  If we could coordinate a month of everybody using their phones it would be analogous to a hurricane or a plague, at least from the point of view of the phone company.

That scenario has been playing out with the internet service providers, at least for the incompetent ones.  For example, Comcast sells me a package with certain assurances about what bandwidth I get into the Internet.  Unsurprisingly, the consumption patterns of their customers are highly skewed, and I’m one of the higher users since this site runs over that connection.  In spite of 20-plus years of history showing that Internet consumption grows extremely fast and quickly grows to fill the pipe provided, Comcast was surprised when more users actually exercised the option they had bought.  It is not relevant what these users are doing with the bandwidth (P2P, video, voice over IP, spam), because if it hadn’t been one of those it would have been something else.

This last example, the ISPs’ problems, is not actually an example of pricing design in the face of a highly skewed distribution.  It just looks like one at first blush.  The real problem the ISPs face is the rapidly rising tide of usage.  They thought they had a slower-growing usage situation, something more like what is seen with the cell phones, but they were wrong.  When they discovered some of the users were consuming all the bandwidth they thought they had purchased, the ISPs presumed those users were little troublemakers rather than early movers.  But that’s a mistake; soon everybody will consume all the bandwidth they can get.

Bluetooth PAN

WANs are wide area networks, like the internet, and LANs are local area networks, like the wifi in your house.  PANs, personal area networks, are – I thought – a joke.  Presumably each Borg has a PAN so his headset, pda, cell phone, and ankle bracelet can talk to each other, and when he sits down in his car the engine, radio, and gps all join in.

So imagine my surprise when this weekend I found I was creating a PAN using Bluetooth.  Bluetooth is a standard full of promise which seems to specialize in delivering frustration.  Two things lead to that frustration.  First, there is lots of bad hardware: chips that don’t work very well, headsets that sound awful, or software stacks that are buggy.  Second, the standard has volumes of optional bits and pieces; so usually it turns out the two devices you want to talk to each other don’t happen to support the necessary bits.  Sometimes that’s intentional; for example, you can’t use your phone as a handset to talk to your computer, since that would let you route around the cell phone company using voice over IP.  The structure of the Bluetooth standards, with all those optional bits and pieces, is typical of telco standards.

The fun I had this weekend was discovering that this phone I got and some of my Macs support Bluetooth PAN, one of those optional bits.  Using this it was trivial to let my Mac connect to the internet connection that the phone provides.  One, two, three: do the usual Bluetooth pairing, select connect to network from the Bluetooth menu, oh … there is no step three.

In theory Bluetooth PAN supports multiple devices sharing the internet connection, but apparently my phone doesn’t do that.

This worked trivially on a MacBook Pro running Leopard.  But sadly it does not work on the MacBook Air – which, in a typical Bluetooth user experience, pretends to support Bluetooth PAN but is unusably slow.  So for that machine I’m forced to switch back to the more traditional Bluetooth DUN (dialup networking – a simulation of the dialup modems of my childhood).

I got into all this because I’ve been wanting to try AT&T’s $20 a month “unlimited” prepaid internet.  Right now you can buy a Z750a for $60 from their prepaid store – and if you poke around you can find online sites that will send you a rebate (after 90 days) of $25 or $30.  If you pop in $100 then the phone is good for a year (buy from CallingMart with a coupon). I don’t intend to make any phone calls, so that’s five months of internet access.  The Z750a supports Bluetooth PAN and DUN, and it can be a remote control for your Mac (I recommend declining all the options you don’t need).  If you buy the USB cable it is faster (800/300 kbps down/up), but the 3G HSDPA over Bluetooth is pretty nice (400/30) as it is.  The USB cable appears to charge the phone as well.

This seems like a great solution for getting pretty good broadband into the home at a reasonable price.  A Mac can share the connection over WiFi, for example.

If you find you’re reduced to using BT DUN rather than PAN then you need to set up the modem’s dialing configuration.  I used this setup: dial: *99***1#, Username: WAP@CINGULARGPRS.COM, Password: CINGULAR1, APN: <leave this blank!>, CID: 1.  It picked the right modem dialing script automatically.

I’m enjoying having broadband pretty much everywhere I go.  The phone just sits in my bag.  AT&T isn’t everywhere (click on data).

Evernote

Evernote is a cool idea.  They want to enable you to keep a copy of all those random notes.  In this example I drew a quick sketch on an index card (on this topic).  I held it up to the little video camera on my MacBook and took this picture.

Then I dragged the resulting image into their application, and synchronized to their server.  Less than a minute later I synchronized again and they had run a bit of handwriting recognition over my sketch.  Not all, but some, of the text was now searchable.

It works pretty well for business cards too.  Fun.

Energy per passenger-mile

From Chapter 2 of the “Transportation Energy Data Book” from the US Department of Energy (with slight format changes).  The commercial air numbers may be high due to freeloading cargo.


Thousand BTU/Passenger Mile
4.2 Public Transit Buses
3.9 Personal trucks
3.4 Cars
3.3 Commercial Air
3.0 Rail - commuter
2.8 Rail - transit
2.6 Rail - intercity
2.2 Motorcycles
1.3 Vanpool

My motorcycle/scooter riding friends may continue to gloat. And no, this is not an excuse to drive rather than take the bus, since adding you to a bus costs about zero BTU/mile; and of course motorcyclists who ride in packs should get a car.
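
The marginal-rider point falls out of the arithmetic: per-passenger energy is just the vehicle’s energy divided by its occupancy, so the table is largely a statement about average load factors.  A sketch with an illustrative vehicle figure (not from the Data Book):

```python
# Per-passenger energy = vehicle energy / occupancy, so the table's
# rankings mostly reflect average load factors, not vehicle efficiency.
def btu_per_passenger_mile(vehicle_btu_per_mile, passengers):
    return vehicle_btu_per_mile / passengers

bus = 38_000  # hypothetical BTU per vehicle-mile for a transit bus

print(btu_per_passenger_mile(bus, 9))   # lightly loaded: near the table's 4.2k
print(btu_per_passenger_mile(bus, 40))  # a full bus beats everything listed
```

And since the bus runs its route whether or not you board, the marginal cost of one more rider really is approximately zero BTU/mile, which is why riding it always beats driving even when the averages look bad.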

Gridlock Economy

Michael Heller’s new book looks interesting.  Heller has, for the last decade, been working to introduce a bit of balance into the discussion downstream from the idea that goes by the name “Tragedy of the Commons.”  He originally called his idea “The Tragedy of the Anticommons.”  Those who see public goods coming to tragic ends often prescribe a dose of property rights.  Heller is interested in situations where too many property rights create gridlock.

His Authors@Google talk is a good introduction.  About a half hour long, it touches on various coordination failures with substantial social costs that arise from an abundance of property rights: drugs that don’t get developed, families displaced from their legacies, urban development frustrated, air traffic congestion, foul-ups in the post-Soviet privatization programs.  Good stuff, and he is reasonably straightforward about how societies should be more aware of the balance they strike when they architect their property rights schemes.

That last point is of particular interest to me, since it goes to the question of how you shape the power law curves.  Is the single property owner who frustrates the urban developer the hero of the long tail, or is he just the worst case of ground cover strangling urban vitality?  Guess I’ll need to read the book.

I’m bemused, or confused, by the realization that both these tragedies arise because some coordination problem blows up when too many parties simultaneously have rights.  I guess you might say the anticommons goes down the tubes when one player says no (or more often just lies silent), while the commons blows up when too many people say yes.  After a bit I can’t see these as really different; it’s back to the group-forming coordination problem.

Billmon

Billmon used to write on politics at his blog the Whiskey Bar.  He was brilliant.  Like many of the bright angry lights he burnt out and moved on.  Sad for those of us who read him, but hopefully good for his mental health.  He has now reappeared with a diary at Daily Kos, which is nice for us.  I hope it’s good for him too.

In his most recent posting he digs into exactly how bloody miserable a hole we find ourselves in with the situation in Georgia.

First, recall that NATO is first and foremost a mutual defense pact.  No firewall – if a member is attacked the rest of us go to their aid.  Mutual defense pacts are what caused the first world war.  NATO’s reason for being was to create a clear bright line the Russians would be unlikely to cross.  It is a vestigial organ of that strategic nightmare that seems to have worked – mutually assured destruction.  Joining NATO is not like joining the EU, or the Rotary club.  Each member gets a hand on the cord which can plunge us into that nightmare.

We have been expanding NATO into the former Soviet sphere.  Think about that.

For example, Poland joined NATO recently.  The process for this expansion appears to be, well, cavalier.  First the Congress, quite casually, passes a law that allows us to treat a nation as if we have a mutual defense pact with them – but without the pact and without the promise to come to their aid if they’re attacked.  Mutual defense light – all the arms trading, military bases, joint operations, etc., but yet a bit of the firewall is retained.

Then, if that works out, they join NATO.  Apparently at no point in the process do we take to heart the nature of the promise being made.  If Slovakia happens to have a disputed border with one of their neighbors that gets heated, we’re there for them.  That’s insane.

If you don’t care what the other guy thinks, and you’re confident you can get away with this kind of pushing out of our sphere of influence, then I guess this makes sense.  But Russia’s not toothless, nor are they stupid.  At minimum they hold Europe’s energy supply hostage; at worst they have nukes.  And never ever forget: nationalism always plays well at home.

So having played this process out with countries like Poland, Hungary, and others, we decided to do this with Georgia and Ukraine.  Bleck.

Well I guess it’s nice that Billmon is back.

Parallax

David Huynh and I worked together in the Simile project (simile.mit) for the past few years.  We burnt through our funding and many of us have moved on; he moved to MetaWeb.  MetaWeb makes the horribly named Freebase.  Freebase is this era’s version of AI knowledge representation (think frames).  People these days tend to mumble Semantic Web, Wikipedia, and Social as well when they talk about these ideas.

At Simile we were focused on more formal collections, like library catalogs, and so our data sets tended to be more homogeneous than the sloppy mess you get in collections that emerge in open social contexts like Wikipedia.  We were often stuck with collections with a limited assortment of item kinds (books, authors, subjects, followed by a short tail … satellite photos … audio tapes …).  Things get more “interesting” as the collections become more unruly.

David has a beautiful new demonstration of some of the ideas, often his ideas, about how to work on one of the tough problems in this space.  What’s refreshing about David’s work is he’s found a portion of the problem space that isn’t cluttered with other people and prior work.  Most of the Semantic Web work suffers from stepping on the toes of – or worse, ignoring – the excellent work done on knowledge representation over the last 50+ years.  What David’s found is fresh turf to work on: the problem of how to allow users, mere mortals, to browse and search on top of these knowledge bases.  That’s fresh because the AI crowd have always been so fixated on the singularity, rather than on helping people.  Not a very social crowd, the AI guys.

Before I start musing about what David’s demo shows, to me, you really have to watch the video.  The rest of this will make little sense without that.

The idea here is that if you have done a search resulting in a set of objects, you should be able to use that set to drive the next step in the search.  In a sense this is the classic unix pipe idea refreshed, where each linkage in the pipeline passes a set to the following step.  Or if you’re of a more mathematical mindset, then each transform in the pipeline is a many-to-many mapping from items to items.
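
The set-pipeline idea can be sketched in a few lines.  This toy (the tiny knowledge base and field names are hypothetical, merely in the spirit of the presidents/offspring/schools demo) composes many-to-many steps, each mapping a set of items to the next set:

```python
from functools import reduce

# Each step maps one item to several items; the pipeline threads a whole
# set through each step, like a unix pipe over sets instead of lines.
def pipeline(start, *steps):
    return reduce(
        lambda items, step: {out for item in items for out in step(item)},
        steps,
        start,
    )

# A toy knowledge base (hypothetical data, not Freebase):
children = {"washington": [], "adams": ["john_quincy"], "bush": ["george_w"]}
schools = {"john_quincy": ["harvard"], "george_w": ["yale", "harvard"]}

presidents = {"washington", "adams", "bush"}
result = pipeline(
    presidents,
    lambda p: children.get(p, []),   # presidents -> their offspring
    lambda c: schools.get(c, []),    # offspring -> schools attended
)
print(result)  # the set of schools attended by presidents' offspring
```

Note how items that map to nothing (Washington had no children) simply drop out, and duplicates (Harvard twice) collapse, because each stage is a set, not a list.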

One of our problems in making these ideas work when we were at MIT was the lack of a sufficiently heterogeneous universe of items; you really need a vast universe with lots of item types.  His cities/buildings/architects example, or his presidents/offspring/schools example, helps to demonstrate that.  What Freebase, with its aggressive rolling up of things like Wikipedia, provides is a step toward that rich universe of items.

If your user is very sophisticated then a command line UI like unix pipelines would be sufficient.  You’d then give him various operators for mapping item sets into other item sets.  He, being sophisticated, would then demand the ability to code up his own mappings (e.g. within 20 miles/years/generations/links, etc.), and to annotate the items he is working with (e.g. scoring and statistics).  And needless to say he’d want the pipelines to persist and generate feeds for other pipelines of various kinds.

The problem becomes much harder if you want to draw in the non-sophisticated mere mortals.  What faceted browsing does, and what you can see here repurposed, is prompt the user with a reasonably clear signal as to what his next options are for moving forward in the search space.  So if you’re shopping for lawn mowers, a faceted browsing UI can prompt with price categories, power sources, etc. to draw the user into the next step of the search.  This is all well and good until you meet real world searches where the number of facets is huge and the number of options in each of them is even larger.  Even if you pick something as dull as library books, the number of facets runs up into the thousands.  Facet browsing actually works best when the goal is to drive the user down the path to a single choice – i.e. shopping.  No need to offer the shoe shopper a facet for labor practices used during manufacture, or materials used in assembly, or nations involved in the manufacturing.
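
Mechanically, a facet is just a value count over the current result set; the hard UI problem is deciding which handful of the thousands of possible facets to surface.  A minimal sketch with a hypothetical lawn-mower catalog:

```python
from collections import Counter

# A facet is a count of field values across the current result set.
def facet(items, field):
    return Counter(item[field] for item in items)

# Toy catalog (hypothetical data):
items = [
    {"power": "gas", "price_band": "$$"},
    {"power": "electric", "price_band": "$"},
    {"power": "electric", "price_band": "$$"},
    {"power": "push", "price_band": "$"},
]

print(facet(items, "power"))       # e.g. electric: 2, gas: 1, push: 1
print(facet(items, "price_band"))  # each count is a clickable next step
```

Computing the counts is trivial; choosing which two or three fields to show, out of the thousands a real collection carries, is where the design work lives.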

David’s sliding plays the same card to help the user.  He throws up a set of options for the user’s next moves.  Note how the facet browsing UI is on the left, and the sliding is on the right.  Note also how rich, and therefore difficult to present, the options are for where to slide next.  It’s admirable that the video doesn’t gloss over that – as illustrated by the example where he slides from offspring of presidents into the educational institutions they attended.

I guess this note comes off sounding a bit cranky.  Hopefully it won’t be taken that way.  The problem is very hard and real.  Search of this kind is going to be common for some class of users in the days to come.  The more progress that can be made on making the UI accessible to mortals, the more widespread it will be.  The brilliance of David’s (and Stefano’s, and David Karger’s) strategy in working thru this problem has been to ground their search for solutions in demonstrations that draw in actual users and work on actual pools of data.  It keeps them honest and it creates feedback loops they desperately need.

Oxygen?

Maybe somebody can explain what’s actually going on here.  MIT has a press release out; it crows about the discovery of a catalyst for splitting water (the podcast there is good) into its constituent hydrogen and oxygen.  Now I thought that was pretty simple stuff.  What’s the hard part?

My model of using hydrogen as an energy storage system was that it became inconvenient in other parts of the system.  Hydrogen is a pain to store, transport, etc.  The conversion of hydrogen back to energy requires overengineered devices, like fuel cells.  I’d presumed that making hydrogen wasn’t a problem.

In any case, they formed a thin film of cobalt and phosphate on a conductive glass.  When they split the water, using electricity, they get oxygen.  I gather that what’s good is that they get the oxygen directly and not thru some inconvenient intermediary?  Apparently there is a second step that helps to assure they get the hydrogen more directly as well, and I’m not following that either.  This second step, while straightforward, typically is done using platinum as a catalyst; so there is some separate story about how to work around that cost – but that’s not part of this invention.

Apparently it’s very nice that this all happens at room temperature and neutral pH.  I guess that’s good, presuming that the existing efficient processes are far from that.

There is apparently a subplot about how the water isn’t pure, but has a dose of phosphate in it, so the thin film is self-repairing.

I guess these are my questions.  First, how much does this simplify existing practice?  Second, is this actually significantly more efficient than existing practice?  And finally, and I think this is outside the scope of the press release, what is the round trip efficiency and complexity of an energy storage scheme based on this?
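
On that last question, the round-trip efficiency is roughly the product of the stage efficiencies along the electricity → hydrogen → electricity loop.  The figures below are ballpark assumptions of mine, not numbers from the MIT work:

```python
# Round-trip efficiency of a hydrogen storage loop is the product of
# the stage efficiencies. Illustrative ballpark figures only:
electrolysis = 0.70   # electricity -> hydrogen
storage      = 0.90   # compression / storage losses
fuel_cell    = 0.50   # hydrogen -> electricity

round_trip = electrolysis * storage * fuel_cell
print(f"round trip: {round_trip:.0%}")  # roughly a third of the energy survives
```

Multiplying the stages together is why the complexity question matters: even a cheap, simple catalyst only improves one factor in that product.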

Update: This posting over at The Oil Drum is quite snarky and dismissive of this “breakthru.”  It is a refreshing counterpoint to the ripple of republished MIT PR.  Presuming it is correct, then what’s different is that existing high-efficiency electrolysis schemes are more complex than this.  How much that affects the capital costs isn’t clear to me, but not much looks likely.

Update: This video is pretty nice and reasonably clear.

“Systems dump excess energy in the form of structure.”

Tim Oren relates this aphorism “Systems dump excess energy in the form of structure.”  He credits James Burke. Delightful.

If true, it goes a long way toward explaining why large firms, who almost by definition succeed in capturing proportionally larger rents, become less adaptable.  Comfortable abundance gives you the luxury of polishing all your policies and procedures, and pretty soon “they’re being pursued by a snail and yet they cannot get away! ‘The snail! The snail!’, they cry. ‘How can we possibly escape?!’”