Regulatory Stories

It would be fun to accumulate a book of stories about regulation. My book would be about what a messy, complex, necessary business it is.

For example, the story of a friend whose contractor disappeared halfway through the remodeling job. When he got another guy to take over, the building inspector insisted they remove the drywall. The wiring had not been inspected.

Today’s example: A number of cities started using dry ice to kill rats in their burrows. It was very cheap and very effective. Soon, the media reported that the EPA had stepped in to say, “Ah guys? That’s not an approved pesticide.” So they stopped. The media accounts all had this just-the-facts quality about ’em, but I sensed the underlying narrative was “Yo reader, ain’t regulation lame!”

I noticed the story since I use dry ice when I catch a squirrel trying to eat my house. It is the recommended technique.

A few days ago New York City started again. The cities pushed to get the technique approved. But the story I read had a telling detail. Apparently what was approved was not dry ice, but rather a product called “Rat Ice” made by some outfit called Bell Labs.

Which raises the question in my mind: who complained to the regulators? In New York their original trial run was a park where they poured the dry ice into 60 burrows; so maybe the park’s users complained that the entire park was smoking.

A cynical observer would quickly guess that the rat poison vendors complained.

This Bell Labs is not the famous research laboratory in New Jersey. Nah, it’s a firm that sells classic rat poisons, baits, and traps all over the planet. They even have a registered trademark tag line: “The World Leader in Rodent Control Technology®”. They haven’t gotten around to marketing Rat Ice on their web site.

To me the proof of this is this bit from a USA Today article that appeared back during the flurry of media reports about the EPA telling the cities to stop using dry ice:

Ruth Kerzee, executive director of the Midwest Pesticide Action Center, said her organization raised concerns with regional EPA officials and the city of Chicago about the new rat-killing method.

Kerzee, whose organization promotes minimizing the use of pesticides, said while dry ice is less toxic than some conventional pesticides it remains unclear what, if any, guidelines cities created to ensure the product is being safely handled by personnel.

“We think it could be a sea changer, a great thing to be able to use, but it does need to be vetted and go through the process, so that we don’t end up in a situation where we throw the baby out with the bathwater,” Kerzee said.

The National Pest Management Association, a trade group representing private pest control companies, also inquired with EPA and the Illinois Department of Public Health about the use of dry ice after Chicago launched its pilot and was told it could not be legally used as rodenticide, said Jim Fredericks, chief entomologist for the association. The group published a message to members in its newsletter last month that “any use of CO2/dry ice to control rodents would be a violation of federal law.”

Fredericks said the industry association is not calling for the EPA to permit dry ice as a rodenticide. “It’s not one of our priorities right now,” he said.

There is a joke to be made here about building a better mousetrap and “It would be a shame if some innovation were to upset that nice business you have there.”

Mastodon

Yet another attempt to create a social network. This one’s called Mastodon. It is analogous to Twitter, i.e. short status updates with following, liking, comments. Web UI, and apps for assorted devices. It’s Usenet-like, with user accounts residing on nodes and the nodes stitched together into an exchange network. Open source with ties to the FSF/GNU community.

We wish them the best of luck; this is a hard rabbit to pull out of the damn hat.

Here are some charts based on data taken from this page enumerating some of the nodes in the network. These are log-log charts, and each point is for a single node. Their equivalent of Twitter’s tweet is being called a toot, though in these charts it’s called a status.
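If you’re curious how one might draw such a chart, here is a minimal sketch; the file name and column names are hypothetical, standing in for whatever the node list actually exports:

```python
# A rough sketch of how charts like these could be produced.
# "nodes.csv" and its column names are hypothetical stand-ins for
# whatever the node-list page actually provides.
import csv
import matplotlib.pyplot as plt

users, statuses = [], []
with open("nodes.csv") as f:
    for row in csv.DictReader(f):
        u, s = int(row["users"]), int(row["statuses"])
        if u > 0 and s > 0:          # log scales can't plot zeros
            users.append(u)
            statuses.append(s)

plt.scatter(users, statuses, s=10, alpha=0.5)   # one point per node
plt.xscale("log")
plt.yscale("log")
plt.xlabel("users per node")
plt.ylabel("statuses (toots) per node")
plt.show()
```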

A not unusual distribution for an unregulated social network. It’s always delightful to make up little stories about why there is a node whose users have made a huge number of toots per user.

Democracy for Realists – space and time

Some more about this book, Democracy for Realists: …

If we accept that voters do not vote for their policy preferences (and you can read the book if you want to see the evidence), then what is driving their voting behavior?

Here are two models that political scientists have put forward – space and time. Both models presume that voters, being humans, lack the time or talent to engage in a very subtle or complex analysis of what to do with their vote; so they simplify things. They approximate.

The spatial model: all of politics is boiled down to some simple metric: left-wing vs. right-wing, say. Or maybe a few, like both an economic and a social variant of left/right. The voter then “merely” asks how close each candidate’s metrics are to his personal metrics. He then votes for the one closest to him.
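As a toy illustration (mine, not the book’s, with made-up names and numbers), the spatial model boils the choice down to a nearest-point computation:

```python
# A toy version of the spatial model: voter and candidates are points on
# a couple of left/right scales, and the voter picks the nearest one.
# Names and numbers are made up for illustration.
import math

def nearest_candidate(voter, candidates):
    """voter: (econ, social); candidates: {name: (econ, social)}."""
    return min(candidates, key=lambda name: math.dist(voter, candidates[name]))

candidates = {"Alice": (-0.6, -0.4), "Bob": (0.5, 0.7)}
print(nearest_candidate((-0.2, 0.1), candidates))   # -> Alice
```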

In the time-based model the voter need only look at his personal experience over time. He then aligns that with whoever was in charge during various time frames and votes for the candidates that deliver better outcomes. It’s a feedback loop, and presumably the statistics of large numbers of voters might make this work out nicely.
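In the same toy spirit, the time-based model is a crude feedback rule:

```python
# A toy version of the time-based model: sum up recent personal
# experience and reward or punish whoever is in charge. The numbers are
# made up; the book's treatment is far more careful.
def retrospective_vote(recent_experiences):
    """recent_experiences: signed numbers, e.g. lost job = -1, good year = +1."""
    return "incumbent" if sum(recent_experiences) > 0 else "challenger"

print(retrospective_vote([-1.0, 0.5, -0.3]))   # -> challenger
```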

Again this is Science.  A theory is only interesting if we can proceed to try and disprove it.

The spatial theory is easy to disprove. You just ask. Compare the voter and the candidate he selects on the metric. Questionnaires can dependably tease out where they are on the scales. For example: support for lower taxes versus more government services? What the data shows is only the slightest correlation. In fact in some cases voters do the opposite of what they prefer. So this theory isn’t helping us.

The problem with the time-based theory is twofold.

The first problem: the usual ones found in feedback-based systems. These systems only work if (a) the signal the feedback is based on is accurate and (b) the feedback’s timing is adjusted correctly. In engineering school I spent a few years learning how to get that right for simple electronic systems like amplifiers. In that context, if you get it wrong you get nothing, or horrific feedback noise. Big social systems are even harder. So first off, voters get a signal (they lose their job, the weather is lousy, the crop fails, the town has an awesome fair, the kid gets a lovely teacher), they sum that up, and they vote for or against the current candidate. Then we have timing. This model rewards politicians for taking actions that have short-term benefits, i.e. ones that show up in the voter’s impression before the next election. Worse, long-term benefits will accrue to the account of some other guy.

As with the spatial model, voters have a very noisy model of the candidates. In this case their model of credit/blame is very poor.

So, are these two models worth anything? Turns out yes.

The spatial model is the gold standard for understanding legislatures. While it’s useless for discovering how a voter will pick his candidate, it is useful for predicting how Bob, your legislator, will choose to vote on any given bill. This is good news: Bob is fairly well informed about the position taken by the bill. On the other hand, the voters who elected Bob do not have a good model of Bob.

The time-based model is actually quite predictive of how voters will behave. But, oh my, they are largely misinformed about blame/credit and their sample is narrow. They only look back a few months. This is not good if you want responsive government. It is useful if you’re placing bets on an election. You can do a damn good job of predicting the outcome of elections by measuring just GDP growth over the last few months.

These models are not as useless as the folklore model (i.e. that voters give their votes to candidates who reflect their personal policy preferences). But if your goal is to explain how democratic governance is responsive to the voters’ preferences, they aren’t going to help you.

More to follow…

Democracy for Realists – acting on fallacies

Part 2 – So let’s step into this book a bit.

The reason to prefer a realistic view of politics is fear.  Fear that your unrealistic premises will lead to unfortunate outcomes.  So political scientists have spun up models for voter behavior. And then, tested them!  If you want to win elections it’s probably best to pay attention.

Personally, my thinking about politics was entirely upended by the work on voting patterns in Congress. This book may be forcing a major re-sorting in my head. I’m not sure how that will settle out. It’s very discomforting to think that the model I took on board from that book might be wrong, that I’ve been extremely deluded.

Books that are attempting to force a painful dose of realism into their audience probably need to spend a lot of time addressing their audience’s bogus beliefs. Scientists do this with studies, data, statistics. It takes years to convince people that the world is not flat, that the sun doesn’t spin around us, that punishment isn’t effective, that bleeding out the bad blood doesn’t help.

So let’s start with the most popular model of how democracy works. It’s widely presumed that voters vote their preferences. Say Sam is extremely concerned about global warming. We’d assume he’d seek out the candidate who is most aligned with his concerns and then vote for him. What does the data say? The data says: NO!

If you take that to heart you really need to stop taking seriously sentences like: “The voters, outraged about X, voted for Mr. P.” Because it’s not true! Talk of the “will of the people” is aspirational, but it too is not true. The whole idea of a mandate slips through your fingers like sand.

Good science is all about disconfirming models. Postulate a theory/model and then see if you can prove it wrong. The audience may hate that, they may love the model, but science doesn’t care.

So this first model of politics in the democratic states is wrong.  The authors call this the folklore theory.

Once it became clear that the folklore theory doesn’t fit the data the political scientists went looking for other theories.  But that’s a story for another day.

<X> for Realists

I’m reading “Democracy for Realists: …”   It has triggered a bemused fantasy about a series of “… for Realists: …” books.  In the tradition of those “… for Idiots …”.

Bookstores have lots of shelf space for self-help books. It’s a popular genre.

Let’s imagine some titles: Schooling for Realists, Vacations for Realists, Project Management for Realists, Home Brewing for Realists, Gardening for Realists.

So why not? I have my theories. For example, picking up a book of this title would seem to signal one’s appetite for disconfirmation. Where’s the fun in that? Or possibly, like the Monty Python argument skit, it implies you’re shopping for a scolding or abuse. At a minimum it would seem to signal that the author is war-weary, scarred, old, and cranky.

One take on self-help books is that they are selling a treatment for stress.   Realism doesn’t sound like a miracle cure, more like chemo.

An Argument for Centralized Systems

Open systems have their good points and their bad. Their weak governance makes it hard, or impossible, to move the installed base. The communities around an open system are more likely to evaporate than to reengineer. They can only make slow evolutionary changes, so instead, one by one, they switch to revolutionary alternatives.

HTTP and JavaScript are fine examples of this. For both, once they were widely adopted, it has taken Herculean efforts by very large players to shift the dial. And that only happened because the installed base was so locked in.

I’m reminded of this by an essay by Moxie Marlinspike. It’s a fine example of how a blog lets you give voice to the spirit of the stairwell. Somebody provoked him, and it appears to have taken him a while to pull together his response. That guy said:

“that’s dumb, how far would the internet have gotten without interoperable protocols defined by 3rd parties?”

At first blush that seems pretty freaking obvious. We have a boatload of stories we tell about why open protocols are potent. Some examples: Open systems help to commoditize things, enabling those that stand on them to thrive; i.e. they help limit the power of the platform vendor to tax all the air we breathe. Open systems solve a search problem, i.e. what is this good for; no platform vendor can possibly know the answer to that question because only end users can comprehend their problems.

But yeah, I have a long list of these arguments/models about what open systems are about. Moxie isn’t arguing that side of the question. The Open Systems tribe tells its stories and other tribes tell other stories. Moxie is trying to tell one.

Moxie has a few arguments in his essay. For example he argues that the classic open protocol examples of Internet mythology all bloomed decades ago and have since resisted much, if any, evolution. SMTP for example. That’s fair, and it’s not. One counterpoint is that these protocols evolved fast while the problem they solved was being discovered, and now they are good enough; the switching costs versus the benefits of switching became such that we can, and in fact ought to, bear those costs rather than switch; even a dictator wouldn’t bother. My point isn’t to say that’s the case, only that it would be work to be sure one way or the other. Another counterpoint is to say that, no, those protocols have not stagnated; we have layered on lots and lots of technology that extends them and addresses new problems as they became apparent. A glance at the number of headers in a typical email gives a glimpse of that for SMTP. SMTP is still a damn good default choice if you need a robust distributed low latency messaging system.
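If you want to glance at that layering yourself, a small sketch using Python’s standard library will do it; the file name is a placeholder for any raw message you’ve saved:

```python
# Count and list the headers on a saved raw message; "message.eml" is a
# placeholder for any email you export from your mail client.
from email import policy
from email.parser import BytesParser

with open("message.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(len(msg.items()), "headers")
for name, _value in msg.items():
    print(name)
```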

Moxie argues that if you have an open protocol you are going to have a hell of a time getting the client-side software to deliver a consistent experience to your installed base. Well, yeah. That’s why for decades Microsoft’s embrace-and-extend tactics made it so damn frustrating to use email. And many argued, and often insisted, that the solution to that frustration was that we should all just get on board the train to Seattle. Google’s clever extensions of IMAP and Jabber are more modern, though possibly less conscious, examples of the same pattern.

But Moxie’s core argument, it seems to me, is that we haven’t the time. That democratic (sic) open systems aren’t able to meet the expectations of the industry we are now in.

That deserves more thought. It is certainly the case that they don’t meet the needs of the VCs, or of product managers. The open-system processes frustrate individual developers – the consensus building requires skills they despise; they’d rather be coding. The whole enterprise smells like politics, because – well duh – all consensus building is. 90% of users don’t care, any more than 98% of your co-workers cared that Microsoft Exchange is/was a closed system. These issues are below their radar, below the facade of the “product” where they never go. Making that case is like activating voters; again, it’s politics.

To my eye Moxie’s essay is part and parcel of the swing back toward centralized computing. Maybe it’s a pendulum, maybe it’s a one-way street. Either way I suspect we are only 10-20% of the way along.

The personal computer was the primary artifact the tribe of decentralized computing gathered around. We have a lot of stories we tell about why it’s awesome. The new tribe, for whom AWS is the principal totem, will tell their own stories. Moxie’s essay is an example.

Let’s Encrypt Everything

I renewed the SSL/TLS certificate on one of my little cloud servers over the weekend. I had been using StartSSL for this. This time I decided to try out the services of Let’s Encrypt, which worked out nicely.

You can read their website for the background story.  This posting is about the details of how I proceeded.

Let’s Encrypt will sign TLS certificates for your website. It uses a scheme called ACME. That scheme involves running some software on your end that talks to their servers. During that conversation a transient page is created on your website; this is used to prove that you control the site. That proof of control is how they validate that you control the site, and thus that it’s OK for them to sign off on the cert.

What’s nice about this scheme is that you really don’t need to know much, if anything, about how all this works.  You only need to install some software on your machine – the ACME client – and then follow the instructions.  The better the ACME client the less work you need to do.  This posting has a nice review of various ACME clients.

I first tried the client that the Let’s Encrypt folks are working on.  It didn’t work well for me.  I then moved on to acme-tiny and it was great; though it certainly required more hand work.

The proof-of-control step requires that you let the ACME client add a page to your web site, i.e. put a file into your site’s HTTP files. That page is served using HTTP, not HTTPS.
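For the curious, here is roughly what that proof-of-control fetch amounts to, as a minimal sketch with a placeholder domain and token:

```python
# Roughly what the signer's proof-of-control fetch boils down to: request
# a token file from the well-known ACME challenge path over plain HTTP.
# The domain and token are placeholders.
import urllib.request

domain = "example.org"
token = "token-issued-during-the-acme-conversation"
url = f"http://{domain}/.well-known/acme-challenge/{token}"

with urllib.request.urlopen(url) as resp:
    # If a blanket redirect bounces this to HTTPS, that's the hiccup
    # described a couple of paragraphs down.
    print(resp.status, resp.read()[:80])
```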

The certificate they give you expires in three months, so they presume you’re likely to run a crontab job to renew the certificate, monthly say.

The largest hiccup I ran into was that the page wants to be served via HTTP. My site is set up to immediately redirect all HTTP traffic to HTTPS. So I had to adjust the configuration to leave a small hole in that behavior just for the proof-of-control page. I do the redirects with Apache’s mod_alias, and it required a bit-o-thought to get that hole built. I now redirect all URLs except those that begin with a period; it’s lame but it works and was easy.
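The hole ends up looking roughly like the following; this is a sketch under my assumptions about your setup, not my exact configuration:

```apache
# A sketch of the hole, not the exact configuration; the host is a placeholder.
# Send everything to HTTPS except paths whose first character is a period,
# which covers the challenge files under /.well-known/acme-challenge/.
# The bare root URL may still want its own Redirect line.
RedirectMatch permanent "^/([^.].*)$" "https://example.org/$1"
```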

Normalization of Deviance

I’ve found it interesting to think about a posting from Bruce Schneier over the last few days.

He’s musing about the term “Normalization of Deviance.”  This term’s home is in public health, and it’s used to describe a syndrome where the profession knows that certain practices are key to assuring safe outcomes; but where they have a difficult and frustrating time keeping the parties involved on board with those practices.

Bruce is musing about how some large swath of the software industry’s security failures can be viewed that way. Clearly in many cases we know what to do, and thus the problem comes down to how difficult and frustrating it is to make that happen.

Some communities of practice (medicine, civil engineering, aviation, …) reside in a (mature?) straightjacket of practice. He kicks off that post with a link to a horrific story of pilots failing to conform to required practice.

Bruce links to this rant, whose author is confident that small software startups can, should, ought to live in that straightjacket too. That’s a conclusion at odds with the buckshot model of startups. An interesting tension, that.

I see I’ve touched on this issue in the past. It’s a fascinating subplot of all this how the straightjacket of regulated practice is analogous to the Overton Window. The average velocity of the Overton Window varies widely from one field to another. There is some sort of relationship between that and safety, but damned if I can say what with the precision I’d like.

Decades ago I had an argument with a young professor at CMU. I was right: for various reasons [1, 2] software engineering was not going to emerge as a “professional engineering” practice in the manner of the older engineering fields. What is clear now is that security issues, like the ones Bruce works on in his day job, are rapidly building out a very similar straightjacket of engineering practice.