Monthly Archives: April 2014

Early Advantage

Founder reports successful IPO to his VCs.

Each year in Boston we have a road race, a marathon.  All I knew about marathons before I moved here I learned in grade school.  It’s bad news: you die at the end.  But in compensation you get remembered as a mythic hero.  Now that I live here I’ve picked up some random knowledge.  For example, it is important not to sprint out into an early lead.  You gotta pace yourself.  Hearing that again this year I thought: yeah! that’s not the advice we give to startups.

Instead we counsel that you gotta get momentum early.  I recall that Steve Jobs advised the Segway team that they needed a huge PR campaign at launch if they ever hoped to trigger the kind of change they imagined.  They didn’t take his advice, and they didn’t trigger the change.  So, see!

Information cascades are one of the many processes that generate power-law distributions. Kieran Healy reports on some fun research into this.  The researchers hacked the early days of a random sample of activities at Kickstarter, Epinions, Wikipedia, and Change.org. They blessed some projects in a small way and then waited to see how much it helped.  It helped.

Kieran is a professional and his commentary is very astute.  Of course, if you’re interested in hacking, helping, or defending systems like these, then it’s very nice to have some experiments reported publicly.

Apparently this year’s Marathon winner did take an early lead.  There is a touching story about collective action that possibly explains why that worked for him.  You would assume a marathon is the archetype of an activity immune to collective action, but you’d be wrong.

Touch Labor

Reading about how military spending isn’t particularly good for the economy, I found this sentence: “Most weapons projects require relatively little touch labor.”  That’s a nice category: “touch labor.”

Touch labor isn’t a widely used term.  In accounting, I gather, labor is sometimes partitioned into direct and indirect.  Managers and janitors are indirect; assembly line workers are direct.

Is it a bad thing? “Deploying and managing IT requires a huge amount of touch labor.”  Or maybe it’s a good thing?  “Limiting meetings increases the time spent on direct labor.”

As usual I love lists of categories: “skilled labor,” “guard labor,” “undocumented labor,” …

Reminds me of the current fad for “makers.”

Spider Chart of Unemployment Statistics

This is an impressive bit of charting from the Atlanta Fed.  It shows a large handful of unemployment metrics.

It shows three samples (March 2012, 2013, and 2014) of thirteen metrics for unemployment.  The designers scaled these various numbers using two reference points: December of 2007 and December of 2009, i.e. just before and just after the Great Recession.  The reference years appear on the chart as two circles: the inner circle is the bad year, 2009, and the outer green circle is the good year, 2007.
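
A back-of-the-envelope version of that scaling, as I read the chart (my assumption; the Fed may do something fancier):

// Map a metric onto the chart's radial axis: the Dec 2009 reference
// lands on the inner circle (0) and the Dec 2007 reference lands on
// the outer circle (1), with linear scaling in between.
function spiderScale(value, dec2009, dec2007) {
  return (value - dec2009) / (dec2007 - dec2009);
}

// E.g. the headline unemployment rate: roughly 6.7% in March 2014,
// against references of 9.9% (Dec 2009) and 5.0% (Dec 2007), lands
// about two-thirds of the way out toward the good circle.
console.log(spiderScale(6.7, 9.9, 5.0)); // ~0.65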

You can see how over the last three years things are getting better, but not by much.  The thirteen metrics fall into four rough categories, as indicated by the labels in the corners.  The metrics that tend to suggest the future have improved the most.  The facts on the ground, i.e. utilization, have not improved much.

The scale selected is a good choice, but it’s worth pointing out that 2007 was itself fairly disappointing.  Consider this next chart.

That shows the U-6 unemployment rate. The recession that followed the bursting of the Internet bubble is the gray band.  So in 2007 the concern was how we seemed unable to achieve utilization close to that of the 1990s.  Shortly after this chart ends the U-6 rose to 17%+.

Another critique: I suspect the payroll number (and others?) is not adjusted for population growth.


Blackmailed to Markup

The web works because it lets you reach other people.  That contact is the motive force behind the entire net.  All the other drivers are complements, competitors, and parasites.

Which brings me to the semantic web.  The semantic web is at best off by one, and at worst it’s wandering off entirely in the wrong direction.  Where web pages are crafted by people to titillate other people (so it’s no wonder they show up), the semantic web exists to empower machines to excite other machines.  It doesn’t make sense, so people don’t show up.  Well, at least not much.

According to the study reported here, 1% of all the web has a few crumbs of semantic markup.  In an industry where we expect ideas to start bonfires, the best we can say for this idea is that it endures.
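
For the record, a crumb here means something like a schema.org annotation embedded in the page.  A minimal illustration, in the JSON-LD flavor (all the names and values here are made up):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Article",
  "headline": "Some Blog Post",
  "author": { "@type": "Person", "name": "Some Author" }
}
</script>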

But on the other hand.

Maybe this is going to change.  And yeah, I and others have said that before.  But the web isn’t the same as it once was.  We are long past the explosive phase-transition growth and deep into the consolidation.  It’s less about people now, and more about the machines (though we call it big data now).  There’s an entire industry (aka SEO), and there has been for a while, that labors to make web pages attractive to the machines rather than to the people.

So it is with interest that I read that report, which suggests that Google is starting to bless pages bearing semantic markup with better placement in search results.  That would be a classic standardization move on Google’s part.  If you have market power you can create incentives that force suppliers to conform to standards in service of your quality/cost metrics.

I’ve predicted this for a while, and mostly I’ve been wrong – since it keeps not happening.  Maybe I’ll finally be right.

Curiously, I’d not noticed before a perversity in my presumption that the only way the semantic web can succeed is if the big machines force the issue: the open world model so beloved by semantic web fans (me included) is totally at odds with this driver.


Piketty #4: Escape from Groupthink

“The important point is mainstream economics has difficulty acknowledging work from such sources because to acknowledge is to legitimize. That creates the strange situation in economics whereby something is not thought or known until the right person says it.”

Isn’t that true in any tribe?  It’s not obvious to me how to distinguish, on a day-to-day basis, when it’s a bad or a good thing.  Though the list drawn from Groupthink isn’t a bad start.

Thomas Palley is suggesting that the economics profession is undergoing a kind of phase change.  That it is coming to grips with the realization that it’s been making many of the mistakes enumerated on that list: illusion of invulnerability, collective efforts to rationalize, absence of questioning, belief in the group’s inherent morality, stereotyped views of enemy leaders, direct pressure on any member …, self-censorship, illusion of unanimity, and self-appointed mindguards.

I’m surprised that I’m not aware of any literature, say a cookbook, on how groups escape from groupthink.  It’s almost the definition of a group that it exists to maintain focus; so the best it can do is drift toward a different focus.  Of course the MBA solution to this problem is leaders, layoffs, reorganizations, and manipulation of incentives – all of which are crude.  And the high-tech version of this is particularly brutal: we let the old firms wither and create new firms from scratch.

I have observed situations where a group slowly loses its grip on the consensus delusion.  It just keeps going through the motions.  The self-censorship and mindguards continue to do their work, but it becomes more and more half-hearted.  In that context, when the layoffs come the level of outrage is tempered.  The group members are then envious of those who jumped ship before the boat’s leaks became so apparent.

Emacs’ simple-httpd and impatient-mode

I found another JavaScript CLI scheme: Skewer, which I’ll get around to trying sooner or later.  But I got distracted by some adjacent work. Skewer’s scheme for connecting Emacs to the JavaScript interpreter, so it can do remote evaluation etc., is to infect the JavaScript interpreter with a bit of code that then connects into Emacs using HTTP long polling. Which is good because it works with all the browsers, and is bad because … well actually it’s pretty good. Obviously, that requires that we have a working HTTP server inside of Emacs.

Unsurprisingly people have written HTTP servers in Emacs Lisp; one of these is simple-httpd.  It is easily installed from the MELPA package repository.

So, what might you do with such a thing? I already mentioned Skewer, and I see that somebody wrote a web UI for his Emacs RSS reader.  Somebody else wrote an AirPlay server. Those examples are suggestive of what other Emacs apps (calc, gnus, magit, erc, etc., etc.) might do. Why? Because they can? I don’t know.

My favorite is impatient-mode, which spontaneously updates a web page as you edit your buffer of, typically, HTML. There’s a video:

You can add filters to the refresh pipeline. So if you want your org-mode files displayed you might refine the filtering scheme. I did this, but I doubt this is the best approach.


(require 'cl)  ; for `case'

(defun imp-htmlize-filter (buffer)
  "Alternate htmlization of BUFFER before sending to clients."
  ;; Leave the result in the current buffer.
  (let ((m (with-current-buffer buffer major-mode)))
    (case m
      ;; org-mode buffers: let org's own HTML exporter do the work,
      ;; writing straight into the buffer handed back to the client.
      (org-mode
       (let ((output (current-buffer)))
         (with-current-buffer buffer
           (org-export-as-html 100 nil output))))
      ;; Everything else: htmlize the buffer and copy the result over.
      (t
       (let ((html-buffer (save-match-data (htmlize-buffer buffer))))
         (insert-buffer-substring html-buffer)
         (kill-buffer html-buffer))))))

Other things (graphviz?) would be fun too.

emacs, node, javascript, oh-my

Each time I turn my attention to using JavaScript I’m a bit taken aback by how tangled the Emacs tooling is. So here are some random points I discovered along the way.

Of course there is a very nice debugger built into Chrome, and that does a lot to undermine the incentives to build something else in Emacs. I only recently discovered there is a more powerful version of that debugger.

Safari and Chrome, because they have WebKit in common, can be asked on start-up to provide “Remote WebKit Debug” connections. You invoke ’em with a switch, and they then listen (e.g. open -a 'Google Chrome' --args --remote-debugging-port=9222). Bear in mind that this debug protocol is quite invasive, i.e. it’s a security risk. Having done that, it’s fun, educational, and trivial to look at the inside of your browser session: just visit http://localhost:9222/ in another browser. Tools that use this protocol use the JSON variant rooted at http://localhost:9222/json.
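
Since it’s all just HTTP and JSON you can poke at it from anything. A minimal node.js sketch, assuming Chrome was launched with the switch above:

// List the pages Chrome is willing to let you debug.
var http = require('http');

http.get('http://localhost:9222/json', function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    JSON.parse(body).forEach(function (tab) {
      console.log(tab.title, '->', tab.webSocketDebuggerUrl);
    });
  });
});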

One thing that makes the Emacs tools for working on JavaScript such a mess is that there are far too many ways to talk to the JavaScript instances, and then there are multiple attempts to use each of those. So there are two schemes that try to use the remote WebKit debug pathway: Kite, and jss (also known as jsSlime).  I’ve played with both, and have them installed, but I don’t use them much. Both are useful in their own ways; I developed a slight preference for jss, which has a pretty nice way to inspect objects.  Though I’m on the lookout for a good Emacs-based JavaScript object inspector.

There is a delightful video from Emacs Rocks explaining yet another scheme for interacting with JavaScript from Emacs, using swank-js.  What’s shown in that video is wondrous and a bit mysterious.  The mysterious bit is that it doesn’t actually make clear what the plumbing looks like.  I’ll explain.

Slime is a venerable Emacs extension originally developed to interact with Common Lisp processes.  It does that via a protocol called swank.  Which means that, unlike for example Emacs shell mode, there is a real protocol.  The sweet thing about slime/swank is that it provides a wide window into the process, enabling all kinds of desirable things, at least for Common Lisp: inspecting objects, redefining single functions, debugging, thread management, etc.  In the video you can see he’s managed to get a swank-like connection into a browser tab, and this lets him define functions and dynamically tinker with the tab.

The plumbing is wonderfully messy.  A node.js process acts as an intermediary, bridging between swank (for the benefit of Emacs) and a web-socket-based debugging protocol that hooks into the browser.  I assume that web socket protocol is similar, if not identical, to the remote WebKit debug protocol.  In the video the Emacs command M-x slime-jack-into-browser establishes the pipeline, and reading that code is enlightening.
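
So the pipeline looks roughly like this:

Emacs (slime) <--swank--> node.js bridge <--web socket--> browser tab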

A consequence of that design is that the resulting Emacs slime buffer is actually interacting with two processes: a node.js process and the JavaScript in the browser tab.  You can switch between these.  I find that goes wrong sometimes, and it took me a while to discover the slime command (slime commands start with a comma) “,sticky-select-remote”.  If you hit tab it will list the things you might talk to.

The swank-js github instructions are pretty good.  And they explain how to use swank-js with node.js – though that assumes you’re reasonably comfortable with node.js already.  I don’t actually follow those instructions.  Instead, after including swank-js in my projects’ dependencies, as the instructions suggest, I require('swank-js') in my main module (only when in a development mode, of course).  It’s worth noting that when you then slime-connect to your node.js program you’ll be in the global object.  Your actual program (usually found in the file server.js) will have been wrapped up in a function, and hence its local variables are invisible to you.  I work around that by putting interesting state into an object and then doing something like global.interesting = interesting.
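
Here’s a minimal sketch of that arrangement (the NODE_ENV check and the name interesting are my choices, not anything swank-js requires):

// server.js
var http = require('http');

// Pull in swank-js only during development; requiring it starts the
// listener that Emacs' slime-connect talks to.
if (process.env.NODE_ENV !== 'production') {
  require('swank-js');
}

var server = http.createServer(function (req, res) {
  res.end('hello\n');
});
server.listen(8080);

// Module locals are wrapped in a function and invisible from the
// slime REPL, so hang anything worth poking at off the global object.
global.interesting = { server: server };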

Recall the remote WebKit debug protocol?  There is a clone of that for node.js known as node-inspector.  Use it!

I have yet to try two other Emacs/JavaScript interaction packages: slime-proxy and skewer.

If you don’t use slime already you might be able to install these using the usual Emacs package repositories.  I ran into problems with that because I use a very fresh slime/swank and the Emacs package system wanted to bless me with older variants.

Hope this helps.


Piketty #2: Meritocracy – Don’t I get a Badge?

An old video for Google’s Glasses:

Broadly speaking there are two variants of meritocracy.  What it’s not: i.e. it ain’t inherited wealth.  What it is: achievement.  We sneer at one and admire the other.  The admirable one is, presumably, based on the skilled application of one’s talents.

One term of art for talents is knowledge capital.  I’m not sure how that term came into being.  A suspicious soul might think it was a way of distracting those without real capital into focusing on something else.  Since, yeah, knowledge shares little in common with capital.  It’s a public good for heaven’s sake!  Largely non-excludable and non-rival; the opposite of capital. Further, capital is easier to hoard, transfer, diversify, steal, and measure.

One of the points Piketty likes to highlight is a distinction between today’s discussion of inequality and that seen in the late 19th century.  Today we have taught the poor that it’s their fault.  They lack merit.  That’s kind of cruel.

But it’s hard to believe that 19th century observers didn’t enjoy “blaming the victim.”  One insta-theory I’ve come up with is that those observers hadn’t yet sworn allegiance to the new industrialists.  Or maybe they had different just-so stories to explain the plight of the poor.

The fun thing about science is that we demand something of our models.  If you say that talent (or knowledge capital) is an important contributor to inequality, that’s fine.  But if you want to cast that into a scientific theory then you need to say a bit more.  You need to predict something that would aid in its falsification.  Piketty provides a hint at that.  If, since the 19th century, knowledge in the hands of highly skilled and educated labor has become more valuable, then one might expect the share of total GDP captured by talented labor to increase.  You can measure that.  He has.  It hasn’t.  The share of capital vs. labor has remained stubbornly constant over the 20th century.

This is not to say that knowledge is unimportant.  But I recall a little story Warren Buffett is said to have told about how his executives would come and pitch the advantages of some wonderful new machine.  It would lower costs!  It would raise profits!  I need capital to bring it on board.  Look at this great spreadsheet, what an awesome investment this is!  And then, by the time it had come on-line, the competitors would all have something similar.  Sadly, the return on investment was then lame.

The rising tide of knowledge lifts the economy.  The resulting wave can be deadly or delightful for individual players.  But the discovery that talent’s share of GDP vs. that of capital has remained constant means it does not help the poor.  Or to put it another way, it doesn’t change the shape of the inequality curve.

If you’re going to explain wealth and income inequality you’re gonna need another story.

So, back to that video.  It would be fun to have a little museum of magical knowledge-transfer devices.  I recall, as a child, that you could buy a little speaker to slip under your pillow, and by playing recordings as you slept you could magically learn stuff.  Hypnosis has always promised to provide this, and these days we have MOOCs.  That video shows a cartoon machine from the 1960s, the “Brain Impulse Galvanoscope Record & Transfer,” or BIG RAT.  Did I mention rat race?  I’m greatly amused at the idea that these machines are technical efforts to create exchanges for knowledge like those for money.  I don’t think it comes as any surprise that Google is attempting to be that middleman.

Piketty #1: inequality comes home

I’m so delighted.  Piketty’s work is just amazing!

Piketty has a nice prologue he uses in many of his talks.  Once, in the 19th century, the distribution of wealth was well inside the Overton window.  Why did it fall out?

He gives us a hypothesis.  The 19th century consensus predictions were apocalyptic (consider Marx), and when these didn’t happen the entire discussion fell out of favor.  Let’s ignore the Great Depression and the Second World War, shall we?

Then, early in the 20th century, a modicum of research showed a slight decrease in the problem in the US.  That undermined the foundations.  So the dominant topic, democratic vs. communist governance, could step up and toss our hero out the window.  At that point if you raised the topic the discussion immediately became “communism!”

Of course old habits die hard, but I guess the fall of communism allows the question a chance to come back into our discourse.

So three things drove the problem out the window: the failure of the apocalyptic predictions, a small modicum of research/data, and displacement in the discourse by the democracy vs. communism governance dispute.  It’s fun to flesh this out, but it’s just his prologue.

He uses the prologue to say two things to his audience.  First, it gives them license to welcome the discussion back.  And second, he can promise to behave.  No apocalyptic visions for him!  It’s a nice story arc, isn’t it?  It’s the classic: the hero, cast out of his home, later returns having learned a life lesson.  I particularly like the little ploy, so common in horror movies, of having the dead hand of a monster rise from the grave.