Category Archives: standards

What a mess!

One of the many functions that standards (and open source) serve is to provide a forum for cutting through patent thickets so that an industry can grow.  But it's hard.  Players who stay out of the standards-making process, but who are active in the industry, are well positioned to poison the well.  Of course you do that after the standard is widely deployed.  The only defense is to acquire patents in the spaces around the standard.  That can be done defensively by the supporters of the standard, or offensively.  There should be a term of art for this tragedy – wherein the attempt to cut a path through the thicket triggers sudden growth in the surrounding thicket.

Recently I've been quite interested in an emerging standard, AMQP (a high-speed enterprise message bus).  AMQP lacks, as far as I can tell, any association with a mature standards body.  There is an AMQP project at the Apache Software Foundation, but that is not a formal standards body.  The AMQP working group's governance is a bit ad hoc, but they do have a stated IP policy[1].

Today I learned that Red Hat has at least one patent application in this space (patent application #20090063418).  This has the makings of a shit storm, for quite a few reasons.  Red Hat is a key member of the ad hoc standards group working on AMQP.  Red Hat is a member of the Apache Software Foundation project tied to the AMQP 'standard.'  Red Hat donated much of the code to that project.

I assume their donation to the ASF is part of a very typical open source business model, one encouraged by the Apache license: there is an Apache-licensed open source product that lots of people adopt for zero licensing cost.  That creates a market, and then vendors offer enhanced versions on top of it.  Red Hat has a very extensive offering in the AMQP space.

Standards create opportunities to do stuff.  These opportunities may well be patent-worthy.  So if you want to grow out the thicket around an emerging standard you just lock some smart guys in a room and start them brainstorming.  Some of what they come up with will be obvious, but that hardly means you won't be able to capture a patent for it.  Just to add fuel to the shit storm, it appears that Red Hat's patent application is for the mind-bogglingly obvious idea of transferring XML data over AMQP.  Of course any patent worth its lawyering starts with some broad claim and then gets more focused.

Some other links:

  • Good posting, more pissed than mine.
  • Red Hat – a statement on this issue.  This is serious bull:  “Although there have been some recent questions about one of our patent applications relating to the AMQP specification, they appear to originate in an attempt to spread FUD. There’s no reasonable, objective basis for controversy.”
  • Well-written note from one of the other firms implementing AMQP.  “… We are however very annoyed about the Fear, Uncertainty and Doubt that actions like this cause.  We are astonished…”

[1]: I'm happy to report that the AMQP working group's IP policy is not as ad hoc as I earlier guessed.  See here: svn link, but you may need to poke your way through some SSL cert complaints.

New search engine makes you look fat

Brett Porter wrote up a nice summary of his first impressions of that new search engine, Cuil.  He had exactly the same experience I had, and that my wife had.  You ego surf only to discover they have a very odd model of your internet presence.  Feeling disappointed, you then wander off.  What we learn from this is that any new search engine had better make us all appear even more above average than we already do.

Kleptocracy

Having recovered the lamp, the genie offers you a wish.  Having rescued his daughter, the king grants you a wish.  Having suffered the tortures of the damn'd, you can slip anything you want into the final draft of the industry standard.  What do you ask for?  During his short time at Harvard I gather that Bill Gates mused that it would be nice to own the traffic lights.  I call that owning a standard.  We oft see patent trolls pop up after a standard builds its installed base, claiming to own the right to tax the traffic.  Gates built his monopoly the old-fashioned way.  That is somewhat more ethical.

Patents are only one way that the king can grant his minions' wishes.  The crude example: no-bid contracts are good, and why bother to audit them?  Clever wishing runs along the lines of Bill Gates's thinking.  Own a piece of the transaction flow.  Find a place with a high-traffic standard and get the option to take a piece of the action.  And so I'm very impressed by what Eaton industries managed to pull off in the UK.  They got proprietary ownership of a new light bulb socket; it's written right into the building code.  The sockets aren't that expensive, but the bulbs that go into them!  Twenty dollars a bulb in excess profits!  And you have to admire the crust of greenwash (the intent of the standard was to increase the use of high-efficiency bulbs).

Institutional failures like this should be fixed.  There should be ways of taking the wish back.  While we wait for that, daring individuals can hack their sockets.

What you say?

I believe it was Ray Kurzweil, circa 1989, who advised encouraging a private jargon inside your new company.  I remember because that was just about the time I was starting to think open would totally trump closed in our industry.  The advice seemed to my ears a bit old-fashioned.  But at the same time I suspected he meant that it was a good way to tighten the bonds inside the team.  By then I knew enough about cults to recognize that's common inside cults; it supports two of the keys to running a good cult – information control, and thought-stopping processes.

But don't take any of that too seriously.  It is good advice nonetheless.  I'll admit to being a Whorfian: the words you highlight affect your thinking.

These days I'd add to all that, though.  Language is the ultimate exchange standard.  So when you decide to innovate a new private language you're cutting yourself off and creating friction, or trade barriers, with your outside partners.  Importantly, the advantage a new group has is that they can pick and choose what to emphasize.  They can take a run at leveraging some particular competitive advantage.  As Dave Winer says: You can't win by zigging when he zigs. You have to zag to beat him.  Ray's advice can be viewed as a bit of implementation advice for that.

So it was with some interest that I saw Google revealing their in-house standard for serializing data.  It's not hard to see that Protocol Buffers are an alternative to XML.  And it is amusing, at least to me, to think that they did this in the hope of reducing the friction that occurs when they must translate from their in-house argot into the dialects used by the outside world.  It's fun to note that if your start-up is as successful as Google you get to promulgate your private jargon.  It is one of the spoils of war.  You can push that friction onto your complements, make them pay the switching costs.

Protocol buffers aren't anything special: messages are lists of key-value pairs, keys can repeat, and there is a small set of value types, including Unicode strings and, recursively, other messages. They are very practical, close to the metal.  Choices were made and they are what they are.  They are quite dense, and easy to parse.  Many messages can be serialized in one pass.  But nested structures carry their size up front in their header, length-prefixed, and that makes strictly one-pass serialization of nested messages hard.
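To make that wire format concrete, here's a minimal sketch in Python of the key-value encoding described above.  The Person/Address layout and its field numbers are hypothetical, just for illustration; real protocol buffers are generated from a .proto declaration by protoc, not hand-rolled like this.

```python
# Minimal sketch of the protobuf-style wire format: every field is a varint
# "key" (field_number << 3 | wire_type) followed by its value.  Wire type 0 is
# a varint, wire type 2 is length-delimited (strings, bytes, nested messages).

def encode_varint(n: int) -> bytes:
    """Base-128 varint: 7 bits per byte, high bit set on all but the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_field(field_number: int, wire_type: int, payload: bytes) -> bytes:
    return encode_varint((field_number << 3) | wire_type) + payload

def encode_uint(field_number: int, value: int) -> bytes:
    return encode_field(field_number, 0, encode_varint(value))

def encode_string(field_number: int, text: str) -> bytes:
    data = text.encode("utf-8")
    return encode_field(field_number, 2, encode_varint(len(data)) + data)

def encode_message(field_number: int, body: bytes) -> bytes:
    # Nested messages are just length-delimited bytes; the length prefix is
    # exactly why a strictly one-pass writer has to buffer the inner message.
    return encode_field(field_number, 2, encode_varint(len(body)) + body)

# Hypothetical message: Person { 1: name, 2: id, 3: Address { 1: city } }
address = encode_string(1, "Boston")
person = encode_string(1, "Ben") + encode_uint(2, 42) + encode_message(3, address)
print(person.hex())
```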

Given an array of bytes it wouldn't be child's play to guess that you're holding a protocol buffer; you could do it heuristically, but it would still be a guess.  You need a protocol buffer's declaration to parse it.  For example, absent the declaration you can't know whether you've got a sint32 or an int64, etc.  All that disappointed me.  It disappointed my inner archivist and my inner peeping tom (who has often debugged tough problems by watching the bytes fly by on the wire).
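Here is a rough sketch of what that heuristic guess looks like in practice: scan the bytes as wire-format fields and report what you find.  All you recover are field numbers and wire types; without the .proto declaration a varint could be an int32, an int64, a bool, or an enum, and a length-delimited blob could be a string, raw bytes, or a nested message.  It reuses the helpers from the sketch above.

```python
# Heuristic scan of a (suspected) protocol buffer.  The wire type tells you how
# to skip a field, not what it means -- that's the metadata the archivist and
# the peeping tom are missing.

def decode_varint(buf: bytes, pos: int):
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result, pos
        shift += 7

WIRE_NAMES = {0: "varint (int32? int64? sint32? bool? enum?)",
              1: "64-bit (fixed64? double?)",
              2: "length-delimited (string? bytes? nested message?)",
              5: "32-bit (fixed32? float?)"}

def scan(buf: bytes) -> None:
    pos = 0
    while pos < len(buf):
        key, pos = decode_varint(buf, pos)
        field, wire = key >> 3, key & 0x7
        if wire == 0:
            value, pos = decode_varint(buf, pos)
        elif wire == 1:
            value, pos = buf[pos:pos + 8], pos + 8
        elif wire == 2:
            length, pos = decode_varint(buf, pos)
            value, pos = buf[pos:pos + length], pos + length
        elif wire == 5:
            value, pos = buf[pos:pos + 4], pos + 4
        else:
            print("probably not a protocol buffer after all")
            return
        print(f"field {field}: {WIRE_NAMES[wire]} -> {value!r}")

scan(person)   # 'person' from the previous sketch
```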

There is a nice detail that allows optional keys, which in turn makes it somewhat easier to add new message variants.  With luck the old message handlers can just ignore the additions.  It made me smile to note that this mechanism can be used to pad messages, which in turn makes it more likely that you can serialize in a single pass.

There is another nice detail that allows a key to appear more than once in spite of the metadata saying it is single-valued.  The last occurrence wins.  This lets you implement a kind of inheritance/defaulting.  For example, if you're implementing CSS style sheets you read the default style message, then read the variations from the default, and you're ready to go.  They call that merging.
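A toy version of that last-occurrence-wins merge, again reusing the helpers from the earlier sketches: decode fields into a dict and let later values overwrite earlier ones, so concatenating a default message with an override message is the whole merge.  The style fields here (1 = font, 2 = size) are hypothetical.

```python
# "Last occurrence wins": concatenate a default message and an override, then
# decode; the override's fields simply clobber the defaults.

def merge(buf: bytes) -> dict:
    fields = {}
    pos = 0
    while pos < len(buf):
        key, pos = decode_varint(buf, pos)
        field, wire = key >> 3, key & 0x7
        if wire == 0:                              # varint
            value, pos = decode_varint(buf, pos)
        else:                                      # assume length-delimited here
            length, pos = decode_varint(buf, pos)
            value, pos = buf[pos:pos + length], pos + length
        fields[field] = value                      # last occurrence wins
    return fields

default_style = encode_string(1, "Helvetica") + encode_uint(2, 12)
override      = encode_uint(2, 18)                 # only the size changes
print(merge(default_style + override))             # {1: b'Helvetica', 2: 18}
```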

Given the declarative information for a given protocol buffer it's not hard to convert it to XML or whatever else you like.  The observers and archivists will just have to be careful not to lose that metadata; and some of them will no doubt build clever heuristics to cobble together substitute metadata.  Interestingly, in spite of efforts to the contrary, you can't really work with XML without additional metadata to help either.  And that stuff is horribly complex.
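Here is roughly what that conversion looks like, sketched with a hypothetical declaration (a dict mapping field numbers to names and kinds) standing in for the real .proto metadata, and reusing the decoding helpers from the earlier sketches.

```python
# Given the declaration, rendering a protocol buffer as XML is a short walk
# over the fields.  The declarations below are hypothetical, matching the
# 'person' message built earlier.

PERSON_DECL  = {1: ("name", "string"), 2: ("id", "uint"), 3: ("address", "message")}
ADDRESS_DECL = {1: ("city", "string")}

def to_xml(buf: bytes, decl: dict, nested=None) -> str:
    parts, pos = [], 0
    while pos < len(buf):
        key, pos = decode_varint(buf, pos)
        field, wire = key >> 3, key & 0x7
        name, kind = decl.get(field, (f"field{field}", "unknown"))
        if wire == 0:                              # varint
            value, pos = decode_varint(buf, pos)
            parts.append(f"<{name}>{value}</{name}>")
        else:                                      # length-delimited
            length, pos = decode_varint(buf, pos)
            raw, pos = buf[pos:pos + length], pos + length
            if kind == "message" and nested is not None:
                parts.append(f"<{name}>{to_xml(raw, nested)}</{name}>")
            else:
                parts.append(f"<{name}>{raw.decode('utf-8', 'replace')}</{name}>")
    return "".join(parts)

print(to_xml(person, PERSON_DECL, nested=ADDRESS_DECL))
# <name>Ben</name><id>42</id><address><city>Boston</city></address>
```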

As I like to emphasize, what really matters with an exchange standard is how many transactions per second move over it.  No doubt this one has critical mass, at least inside the Google cloud.  How important this standard becomes depends on how much adoption it gets from non-Google actors.  But I suspect we will be speaking this dialect more often, and for quite a while.  Of course, the rudest way to say that is that we will be chasing their tail lights.  But I don't really feel that way, since I've never particularly liked XML and I welcome a substitute.

Upgrade forcers, and DNS

I'm not particularly proud of the neologism "upgrade forcer."  It encourages bad behavior.  Product managers can be a desperate lot, particularly when their bonus is riding on how many copies of the upgrade get sold.  When times are good, sweet new product features will draw users to upgrade.  But as products mature the customers grow content, and convincing them to upgrade gets harder.  Having run out of carrots, the product managers are tempted to turn to sticks.

Installed bases are hard to move.  Installed bases without a clear owner, or product manager, are even harder to move.  You can chat up how nice it would be if we all switched to IPv6, but nice isn't a must.  It would be nice if my correspondents encrypted their email, but little drives that upgrade.  Effective upgrade drivers engineer a situation where users move quickly to upgrade.  Y2K was an effective upgrade driver; 1999 was a very good year for upgrade revenue.  I've a pet theory that the late-90s high-tech bubble owes a debt to that.

One of the many open standards in the Internet menagerie that badly needs an upgrade is DNS.  DNS is an amazing design for its time, but one of its failings is security.  It has serious design flaws, and numerous vulnerabilities.  For example, your ISP, whom you probably should not trust, can trivially intercept DNS queries and inject whatever answers it thinks serve its purposes.  The vulnerabilities make that even worse, since at least you can complain, negotiate, even sue if you catch your ISP playing those games.  But if some evil dude poisons one of the DNS servers you happen to use, and your email, IM, or bank traffic is intercepted, you're unlikely to have much recourse.
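To see why interception and spoofing are so easy, here's a small Python sketch of what a plain DNS query actually is: one unauthenticated UDP packet, where the only thing tying an answer back to the question is a 16-bit transaction ID (plus, after the recent patches, a randomized source port).  Anyone on the path, your ISP included, just has to answer first.  The resolver address below is an assumption; point it at whatever resolver you actually use.

```python
import random
import socket
import struct

def build_query(name: str) -> bytes:
    """A bare-bones DNS query for an A record."""
    txid = random.randrange(0x10000)                            # the 16-bit ID
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)   # RD=1, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", 1, 1)            # QTYPE=A, QCLASS=IN

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_query("example.com"), ("8.8.8.8", 53))        # assumed resolver
reply, sender = sock.recvfrom(512)
# Nothing in 'reply' proves who sent it; whoever guesses the ID and port first wins.
print(f"{len(reply)} bytes from {sender}")
```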

Security flaws play an interesting role in driving upgrades.  The product manager can use them to threaten the users, while at least nominally not using force himself.

For years people have been attempting to redesign DNS to add better security.  At first blush it seemed straightforward, but it turned out to be way hard.  My sense is that people now think they have the design problem under control.  So the next step is getting the installed base to move.  Getting a possibly immovable installed base to move generally requires an irresistible force: some compelling value, or something bad.  There is plenty of bad already, though as is usually the case the immovable base finds it easy to avert its gaze from those horrors.

This posting was triggered by a comment in yesterday’s announcement of yet another really bad flaw in DNS.

There is an update to the DNS standard, known as secure DNS or DNSSEC, that addresses this problem.  But most people see it as a nice-to-have rather than a must-have.

With luck that changed today.  Yesterday the existence of a really, really bad flaw in the DNS protocol was publicly revealed.  The actual flaw's details were not revealed, but a massive software upgrade to temper the risk is being rolled out.  But this line in the announcement caught my attention.

“DNSSEC is the only definitive solution for this issue.”

So maybe, just maybe, we have found an upgrade forcer for DNS.  This is extremely good news if you're a DNS vendor of any kind.  Profit!  For those who are driven by fun, rather than greed, fixing DNS would allow us to use it safely for a much larger range of lightweight database functions.

Negative Energy

I have sighted a new urban myth: electric heating is cheaper than oil heat! Here in Boston people heat with both gas and oil, and the cost per unit of heat between the two has diverged rapidly over the last few years. Those who heat with oil are looking for ways out of their plight. Apparently the rumor making the rounds is that it is cheaper to use electric heat. That's not true.

In related news, Martin brings my attention to a company, EnerNoc, that sells negative energy, i.e. load shedding, to the utilities. They use telecom and widgets to shift power consumption from high-demand time periods into low-demand time periods. Martin's example is the fridge. You chill when power is plentiful and let it coast when others are paying higher prices.

I assume that EnerNoc's role in all this is to aggregate small power users into a large enough pool to be worth selling to the utilities. It's an interesting example of a coordination problem. There are of course other ways to approach the problem, ones that are less dependent on a thicket of contracts and ongoing coordination signals controlled by a middleman and enabled, as Martin points out, by the telecom infrastructure.

The obvious alternative is to just broadcast a signal and let the demand side react to it, by selling some simple technology that responds to the signal in reasonably simple ways. That alone would enable substantial contributions from the demand side. But you can improve the incentive structure either through regulation or by using statistical sampling to tell which customers have gotten with the program, and then reduce their tariffs.
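A toy sketch of how little intelligence the demand side actually needs: the grid broadcasts a single stress number and a dumb appliance controller decides whether to run now or coast. The stress scale, the temperature bounds, and the fridge itself are all hypothetical, just to show the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class Fridge:
    temp_c: float             # current internal temperature
    max_temp_c: float = 7.0   # never coast past this, whatever the grid says
    target_c: float = 3.0

def should_run_compressor(fridge: Fridge, grid_stress: float) -> bool:
    """grid_stress: 0.0 = power is plentiful, 1.0 = the grid is begging."""
    if fridge.temp_c >= fridge.max_temp_c:
        return True                    # food safety beats load shedding
    if grid_stress > 0.7:
        return False                   # coast through the peak
    return fridge.temp_c > fridge.target_c

# A quiet night versus a hot August afternoon:
print(should_run_compressor(Fridge(temp_c=5.0), grid_stress=0.1))   # True: chill now
print(should_run_compressor(Fridge(temp_c=5.0), grid_stress=0.9))   # False: coast
```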

The amount of signal that needs to flow from the grid operators to the consumers is small, in the sense that you can broadcast it. Only enough signal needs to flow back the other way to assure that the incentives play out right. It is stupid to presume that the only incentives available are monetary, or that they need to be executed with fastidious accounting. Most social systems have very fuzzy accounting and they work just fine, thank you!

The puzzle to be solved here is how to draw more of the peripheral demand into a load-balancing system. Reading about EnerNoc's approach isn't the first time I've seen discussion of this. For example, Bruce Schneier mentioned a regulatory attempt at something similar. I liked that one a lot; it provided a way to signal household thermostats. He was concerned that the resulting system would attract hackers. I presume he'd be just as sanguine about the security of the EnerNoc system; probably more so since it's a closed system.

Such concerns are appropriate, but for heaven's sake I wish smart people like Bruce would stop pretending that these cases are somehow unique. It is the very rare large-scale system that doesn't have vulnerable choke points: hubs whose failure can bring the entire system to its knees. Telling designers not to build large systems because of those risks is lame. Helping them know how to build them so they are safe and robust is hard, yes. But these systems get built because they generate mind-boggling amounts of value. So it's better to do the hard job and forgo the short-term pleasure of a bit of hysteria.

Speaking of load shedding: turning your car’s engine off when you stop is more efficient than you thought.

de jure standards versus de facto standards

Standards are often a land war, where members of the standards body act to create de facto standards at the same time they participate in the negotiation of de jure standards. The mix varies. In some contexts the land war dominates. But it's real politics and certainly tastes dirty. Many standards bodies have clauses in their membership agreements to prevent other kinds of dirty gaming, for example clauses to temper the members' ability to play stick-up with their patent rights. It's harder to do in these cases, since it would require the members to temper their striving for market share. This plays off interestingly against the device where a standards body retains for its members early access to the specification – a device intended to give the members the reward of early-mover advantage. This pattern also reminds me of the scheme where a firm creates an "open standard" so other market players chase its tail lights. CardSpace is a good example of that move.
Meanwhile there is another delightful introductory paragraph in that posting.

Latitude vs. Longitude

I very much liked this introductory paragraph:

Years ago I read an interesting article about the encyclopedia entry for the keyword “Longitude”. According to the article, the entry merely said “See Latitude”. With that short, two-word sentence the encyclopedia author conflated these two concepts as mere orthogonal dimensions, lumped together, each as boring as the other. This ignored the fact that latitude is boring, easy, trivial, known to the ancients and as easy to calculate as measuring the altitude of Polaris. But longitude, there lies an epic adventure, something fiendishly difficult to calculate accurately, something that propelled a great seafaring nation to a search for accurate timepieces that would work at sea, just in order to more accurately calculate longitude. Books have been written about longitude, lives lost, fortunes made. But latitude — latitude is for children.

Complementary pairs appear throughout the world of standards. Often one of these is easier to pull together than the other. After the fact, or if one is only looking casually, this difference in cost tends to be forgotten. It's one of the many places where at first blush two things appear the same, but as you get closer they are not. Delightfully, you can actually use this cognitive effect for humor.

A Doctoral Thesis is not a Standards Specification, but…

I've greatly enjoyed much of Richard Gabriel's writing over the years.  Though I'll admit I haven't read anything he's done in the past few years.  In any case I happened to listen to this interview he gave at OOPSLA to Software Engineering Radio.  The interviewer wanted to learn about this thing, Lisp, and he asks a series of questions to dig into the matter.  While for me this was pretty dull, Richard does retell a story I'd not heard in recent years.  That got me thinking about a model of how ideas used to flow from the academic research labs into the programming community at large, and in particular how the Lisp community didn't use standards in quite the same way as other language communities.

Lisp is a great foundation for programming language research.  It is not just that it's easy to create new programming frameworks in Lisp.  The pie chart of where you spend time building systems has a slice for framework architecting and engineering.  Lisp programmers spend a huge portion of their time in that slice compared to folks working in other languages.  In Lisp this process is language design, whereas in other languages it's forced into libraries.  There is a tendency in other languages for the libraries to be high-cost, which makes them more naturally suited for a standardization gauntlet.  In Lisp it's trivial to create new frameworks, and they are less likely to suffer the costs and benefits of becoming standardized.

You get a lot more short-term benefit in Lisp, and you pay later as sweet frameworks fail to survive.  They don't achieve some level of sustenance because they don't garner a community of users to look after them.

Back in the day this was less of a problem.  And thereby hangs the tale that Richard casually mentioned.  He was sketching out a pattern that was common during AI's early golden age.  Graduate students would aim high, as is their job, and attempt to create a piece of software that would simulate some aspect of intelligence – vision, speech, learning, walking, etc. – what aspect doesn't really matter.  In service of this they would create a fresh programming language that manifested their hypothesis about how the behavior in question could be produced.  This was extremely risky work with a very low chance of success.  It's taken more than fifty years to begin to get traction on all those problems, and back in the day computers were – ah – smaller.

Enticing graduate students into taking huge risks is good, but if you punish them for failing then pretty soon they stop showing up at your door.  So you want to find an escape route.  In the story that Richard cites, and which I'd heard before, the solution was to give them a degree for the framework.

Which was great.  At least for me.  All through that era I used to entertain myself by reading these doctoral theses outlining one clever programming framework after another.

What's fascinating is that each of those acted as a substitute for a more formal kind of library standardization.  They filled a role in the Lisp community that standardized libraries play today in more mainstream programming communities.  This worked in part because individual developers could implement these frameworks, in part or, if they were in the mood, in their entirety, surprisingly quickly.  These AI languages provided a set of what we might call programming patterns today.  Each doctoral thesis sketched out a huge amount of detail, but each instance of the ideas found there tended to diverge under the adaptive pressure of that developer's unique problem.

So while a doctoral thesis isn’t a standards specification it can act, like margarine for butter, as a substitute.  Particularly if the consumers can stomach it.  Lisp programmers like to eat whole frameworks.

Dynamic Standard Setting

Off and on I wonder a bit about how quickly standards can change, and what it would mean if we could change them very quickly.  My usual example for this would be highway speeds.  There isn't much point in driving 70 mph for 15 miles down the highway just to join a 3-mile blockage of stop-and-go traffic.  The authorities could, presumably, signal everybody upstream that it's in their best interest to drop down to 45 mph.

Of course you can also see the traffic-calming ideas, the architecture-of-control ideas, some of the ideas about calming traffic via the intervention of individual drivers.  Obviously such a system can be implemented along the lines of libertarian paternalism.

Dynamic standard setting is like dynamic pricing.  IT makes it easier to implement.  You could replace all the speed-limit signs with electronic signs, much as the store I was in the other day had replaced, in the shoe department, all their price labels with electronic ones.  Of course pricing and standards changed dynamically long before we had IT.  Other stores just put up 30%-off signs.  I don't doubt that if the highway authorities communicated that "southbound travelers on 128 are advised to practice a 30% reduction in speed" much of the benefit could be achieved.

These musings are triggered by an idea the California regulators have floated to do something analogous with the thermostats in new buildings.  The scheme would allow them to signal the buildings to back off on their electricity consumption when traffic jams occur in the electricity distribution network.  Lauren Weinstein's reaction to this suggestion is delightfully over the top.