Dice

I think the #1 thing I’m embarrassed about is that I didn’t take seriously the one-in-three chance that the best pollsters gave Trump of winning. As John Holbo wrote: “I’ve never played Russian roulette – don’t intend to – but I think I know enough of tabletop games to know that sometimes a six-sided die comes up 6.”

So I really didn’t have a contingency plan; still don’t. I’d chatted about hedging, i.e. placing a largish bet that Trump would win, so that at least I’d have some winnings either way. But the consensus was that it’s difficult to hedge against an existential threat.

Back around the turn of the century I read “Congress: A Political-Economic History of Roll Call Voting,” which revealed the shocking trend in polarization. Back then it was all on the Right. Still is, to first order.

So I came to form opinions about how that was likely to unfold over time, models of possible destinations. For example, the last time this happened we got the Civil War.

My best-case scenario was (maybe still is) that the party of the right would implode; go insane. That the voters would look at that and run away. The George W. Bush administration gave some confirmation to that hope, but also a taste of what a terrifying journey that would be.

What I didn’t know until recently is how political scientists tend to think about voter behavior and preferences. For example, voter preferences flow from the party to the voters, mostly, not the other way around. It’s unsurprising when you think about it: how is the typical person to form an opinion about complex issues of governance except by turning to those around them?

It’s not as simple as saying the consensus of the party members flows top down. It’s a social network thing. But for a party member to step away from the consensus means accepting a huge amount of collateral damage: he has to shred his entire social network.


Narwhale distribution

Amusing neologism: the narwhale distribution, i.e. a statistical distribution with a horn.

[Image: the narwhale distribution]

That’s taken from a lovely essay in “Small Things Considered” about how some viruses operate in two modes. In the mode you’re probably familiar with, they infect the cell, repurpose things to reproduce, and then blow up the cell to go in search of more prey. But in another mode they are more like a parasite. It seems that when in this second mode they can provide a benefit to the cell, i.e. a defense against the first mode. Oh nature!

An Argument for Centralized Systems

Open systems have their good points and their bad. Their weak governance makes it hard, or impossible, to move the installed base. The communities around an open system are more likely to evaporate than reengineer. They can only make slow evolutionary changes, so instead, one by one, their members switch to revolutionary alternatives.

HTTP and JavaScript are fine examples of this. Once each was widely adopted, it has taken Herculean efforts by very large players to shift the dial, and that only happened because the installed base was so locked in.

I’m reminded of this by an essay by Moxie Marlinspike. It’s a fine example of how a blog lets you give voice to the spirit of the stairwell. Somebody provoked him, and it appears to have taken him a while to pull together his response. That guy said:

“that’s dumb, how far would the internet have gotten without interoperable protocols defined by 3rd parties?”

At first blush that seems pretty freaking obvious. We have a boatload of stories we tell about why open protocols are potent. Some examples: open systems help to commoditize things, enabling those that stand on them to thrive, i.e. they help limit the power of the platform vendor to tax all the air we breathe. Open systems solve a search problem, i.e. what is this good for; no platform vendor can possibly know the answer to that question because only end users can comprehend their problems.

But yeah, I’ve long had a list of these arguments/models about what open systems are about. Moxie isn’t arguing that side of the question. The Open Systems tribe tells its stories and other tribes tell other stories. Moxie is trying to tell one.


Moxie has a few arguments in his essay. For example, he argues that the classic open protocol examples of Internet mythology all bloomed decades ago and have since resisted much, if any, evolution. SMTP for example. That’s fair, and it’s not. One counter-point to that argument is that these protocols evolved fast while the problem they solved was being discovered, and they are now good enough: the switching costs versus the benefits of switching have become such that even a dictator wouldn’t bother to force a switch. My point isn’t to say that’s the case, only that it would be work to be sure one way or the other. Another counter-point is to say no, those protocols have not stagnated: we have layered on lots and lots of technology that extends them and addresses new problems as they became apparent. A glance at the number of headers in a typical email gives a glimpse of that for SMTP. SMTP is still a damn good default choice if you need a robust, distributed, low-latency messaging system.

Moxie argues that if you have an open protocol you are going to have a hell of a time getting the client-side software to deliver a consistent experience to your installed base. Well, yeah. That’s why for decades Microsoft’s embrace-and-extend tactics made it so damn frustrating to use email. And many argued, and often insisted, that the solution to that frustration was that we should all just get on board the train to Seattle. Google’s clever extensions of IMAP and Jabber are more modern, though possibly less conscious, examples of the same pattern.

But Moxie’s core argument, it seems to me, is that we haven’t the time. That democratic (sic) open systems aren’t able to meet the expectations of the industry we are now in.

That deserves more thought. It is certainly the case that they don’t meet the needs of the VCs, or of product managers either. The open-system processes frustrate individual developers – the consensus building requires skills they despise; they’d rather be coding. The whole enterprise smells like politics, because – well, duh – all consensus building is. And 90% of users don’t care, any more than 98% of your co-workers cared that Microsoft Exchange is/was a closed system. These issues are below their radar, below the facade of the “product,” where they never go. Making that case is like activating voters; again, it’s politics.

To my eye Moxie’s essay is part and parcel of the swing back toward centralized computing. Maybe it’s a pendulum, maybe it’s a one-way street. Either way, I suspect we are only 10-20% of the way along.

The personal computer was the primary artifact the tribe of decentralized computing gathered around. We have a lot of stories we tell about why it’s awesome. The new tribe, for whom AWS is the principal totem, will tell its own stories. Moxie’s essay is an example.

Never Expires

Given this raw material there is something to be said here.  But I can’t quite pull it together.

Something about how coupons are a way to overcome the buyer’s impulse control?

Something about how no market is immune to discriminatory pricing?

This may well be the most evil thing I’ve yet encountered in my hobby around pricing games and shaping consumer behavior.

… Valeant’s business model.

They bought an out-of-patent drug (Sodium Seconal) which is used in physician assisted suicide – and after the California government passed laws to make the above legal they jacked the price up to $3000. … consistent with Valeant’s business model there is a copay coupon so that you, dear patient, are not out of pocket, whilst your insurance provider takes the hit.

via Bronte Capital.

Let’s Encrypt Everything

I renewed the SSL/TLS certificate on one of my little cloud servers over the weekend. I had been using StartSSL for this. This time I decided to try out the services of Let’s Encrypt, which worked out nicely.

You can read their website for the background story.  This posting is about the details of how I proceeded.

Let’s Encrypt will sign TLS certificates for your website. It uses a scheme called ACME. That scheme involves running some software on your end that talks to their servers. During that conversation a transient page is created on your website; this is used to prove that you control the site. That proof of control is how they validate that you control the site, and thus that it’s OK for them to sign off on the cert.

What’s nice about this scheme is that you really don’t need to know much, if anything, about how all this works.  You only need to install some software on your machine – the ACME client – and then follow the instructions.  The better the ACME client the less work you need to do.  This posting has a nice review of various ACME clients.

I first tried the client that the Let’s Encrypt folks are working on. It didn’t work well for me. I then moved on to acme-tiny and it was great, though it certainly required more hand work.
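
To give a flavor of that hand work, here is a rough sketch of the acme-tiny recipe. The file names, paths, and domain are placeholders, and the exact flags may vary with the version you install:

    # Keys and a certificate signing request (one-time setup)
    openssl genrsa 4096 > account.key
    openssl genrsa 4096 > domain.key
    openssl req -new -sha256 -key domain.key -subj "/CN=www.example.com" > domain.csr

    # A directory the web server exposes for the proof-of-control pages
    mkdir -p /var/www/challenges/

    # Ask Let's Encrypt to sign the cert
    python acme_tiny.py --account-key ./account.key --csr ./domain.csr \
        --acme-dir /var/www/challenges/ > ./signed.crt

After that it’s a matter of pointing Apache (or whatever you run) at the resulting certificate file.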

The proof-of-control step requires that you let the ACME client add a page to your web site, i.e. put a file among the files your site serves over HTTP. That page is served using HTTP, not HTTPS.

The certificate they give you expires in three months, so they presume you’re likely to run a cron job to renew the certificate, monthly say.
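
For what it’s worth, a hypothetical crontab entry for that, with placeholder paths, which reloads Apache after a successful renewal:

    # Renew on the first of each month at 3am, then reload the web server
    0 3 1 * * python /opt/acme/acme_tiny.py --account-key /opt/acme/account.key --csr /opt/acme/domain.csr --acme-dir /var/www/challenges/ > /etc/pki/tls/certs/signed.crt 2>> /var/log/acme_tiny.log && service httpd reload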

The largest hiccup I ran into was that the page wants to be served via HTTP. My site is set up to immediately redirect all HTTP traffic to HTTPS. So I had to adjust the configuration to leave a small hole in that behavior just for the proof-of-control page. I do the redirects with Apache’s mod_alias, and it required a bit-o-thought to get that hole built. I now redirect all URLs except those that begin with a period (the proof-of-control pages live under /.well-known/acme-challenge/, hence the period); it’s lame but it works and was easy.
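
In case it helps, a minimal sketch of what that mod_alias hole can look like; the domain is a placeholder and your layout will differ:

    # Redirect everything to HTTPS except paths whose first segment
    # starts with a period (which covers /.well-known/acme-challenge/...)
    RedirectMatch permanent "^/([^.].*)?$" "https://www.example.com/$1"

A mod_rewrite exclusion scoped to just the challenge directory would be tidier, but this was the easy change given the existing mod_alias setup.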

Tarsnap Notes

I set up tarsnap to back up one of my small cloud servers. Some notes on the hiccups:

  1. Tarsnap’s install involves compiling it – that tells you something about the overall tone :).  The compile requires this include file: “ext2fs/ext2_fs.h”.  My little server lacked that, and it took a while to figure out how to get it.  In this case the answer was: yum install e2fsprogs-devel
  2. There are two keys.  One is used to access your account on his server.  The second is used to encrypt (among other things) your backups.  I was puzzled by this since I’d assumed it would encrypt the backups with one key (which would be installed on the machine(s) you’re backing up), and a second key (the private key) would be used to decrypt them later.  Turns out that behavior is – sort of – optional: the one key you get fills both roles, and you need to use the key management tool if you want to make the distinction (see the sketch below).
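
A rough sketch of that key dance, with hypothetical paths and machine name; check the tarsnap man pages before trusting any of it:

    # Register this machine and create the master key (keep a copy off the server!)
    tarsnap-keygen --keyfile /root/tarsnap.key --user you@example.com --machine mybox

    # Derive a restricted key that can only write archives, and leave only
    # that one on the machine being backed up
    tarsnap-keymgmt --outkeyfile /root/tarsnap-write.key -w /root/tarsnap.key

    # A backup run using the restricted key
    tarsnap --keyfile /root/tarsnap-write.key -c -f "mybox-$(date +%Y%m%d)" /etc /home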

Normalization of Deviance

I’ve found it interesting to think about a posting from Bruce Schneier over the last few days.

He’s musing about the term “Normalization of Deviance.”  This term’s home is in public health, and it’s used to describe a syndrome where the profession knows that certain practices are key to assuring safe outcomes; but where they have a difficult and frustrating time keeping the parties involved on board with those practices.

Bruce is musing about how some large swath of the software industry’s security failures can be viewed that way. Clearly in many cases we know what to do, and thus the problem comes down to how difficult and frustrating it is to make that happen.

Some communities of practice (medicine, civil engineering, aviation, …) reside in a (mature?) straitjacket of practice. He kicks off that post with a link to a horrific story of pilots failing to conform to required practice.

Bruce links to this rant, whose author is confident that small software startups can, should, ought to live in that straitjacket too. That’s a conclusion at odds with the buckshot model of startups. An interesting tension, that.

I see I’ve touched on this issue in the past. It’s a fascinating subplot of all this how the straitjacket of regulated practice is analogous to the Overton window. The average velocity of the Overton window varies widely from one field to another. There is some sort of relationship between that and safety, but damned if I can say what it is with the precision I’d like.

Decades ago I had an argument with a young professor at CMU. I was right: for various reasons [1, 2] software engineering was not going to emerge as a “professional engineering” practice in the manner of the older engineering fields. What is clear now is that security issues, like the ones Bruce works on in his day job, are rapidly building out a very similar straitjacket of engineering practice.

Process Shock

I’m very interested in questions of scale, so Ben Adida‘s “Important read” clickbait had an easy time getting me to click through to “Orders of Magnitude“. But let me save you a click.

FYI – HR is very different at Google with 8! orders of magnitude more employees than it is at a startup.

He actually wrote “Important read! For bigco engineers who join startups, eng processes also are very different at diff scales.”  So he had me twice hooked; I’m thinking a lot about process these days, as one does.

From the employee/HR point of view: moving from one firm to another, like any move, is all about encountering, digesting, and introducing new conventions. The resulting culture shock is always part of the work, for both sides. This emotional work is huge.

Management, on the other hand?   Well, their brief includes moving the immovable culture.  The real work of HR is keeping the collective culture shock in some sort of Goldilocks zone.