Author Archives: bhyde

MOCL demo

When I first heard about MOCL a few years ago I was pretty sure it wouldn’t survive, but it looks like I was wrong.  See the nice video that the folks at wukix.com have recently posted.  It’s an impressive 15-minute demo of using MOCL to write a Common Lisp application targeted at iOS.

Love that remote REPL for debugging your application!

MOCL has a fair amount of extended syntax so it can play nice with Objective C.

I’m surprised they don’t have a free demo version.  But then, I’m a cheapskate!

So, go watch the video 🙂.

Piketty #5: Who Owns your Country?

Here’s something I did not know or expect.  Early in the book Piketty is laboring to help his readers develop intuitions about income and wealth.  In the colonial era the European nations controlled a lot of productive assets in other nations.  The income from those contributed “nicely” to their national income.  Heck, that’s almost the definition of colonialism.  These flows, obviously, weren’t balanced.  Income flowed from the colonies back to Europe; little flowed the other way.  Piketty would like his readers to know that’s not true anymore.

My entire life people have worried that some nation or other is buying up America.  Which nation keeps changing though.  To me it has always seemed only right that the former colonial powers would live in fear that the tables might turn.  We used to worry that the Japanese were buying up all the good real estate.  We worry now about the Chinese.  Sometimes we worry about oil rich nations.  I was unaware that the French, in turn, worry about California pension funds.

There is lots of cross border ownership.  But on the whole both the income flows and the ownership tend to balance out on a pairwise basis.  I did not know that.  I would not have predicted that.  I am not sure I believe it.  I’m optimistic the book will say more.

I’m enjoying his efforts to build out my intuitions about this stuff.  Some examples…

FYI – national income isn’t the same as Gross Domestic Product.  GDP doesn’t include those cross border income flows, and it doesn’t net out depreciation.  Which I gather we can roughly estimate at 10% a year.  I knew this only through its consequence: recessions are typically followed by an uptick in durable goods.
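Roughly speaking, the relationship between the two measures ties those pieces together (the 10% figure above being the depreciation estimate):

```latex
\text{national income} = \text{GDP} - \text{depreciation} + \text{net income from abroad}
```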

About half of the pool of wealth is residential real estate.

Economists have a rule of thumb that a third of the income flows back to capital as the return on investments, and the rest to labor.  But that rule of thumb is likely bogus.  I’d be tempted to say it’s almost a political statement.  Piketty is more polite: it is based on thin data.

The comment that cross border flows of income tend to balance out these days gives us license to demote that kind of income to a minor part of the overall analysis.

What we consider to be wealth is a social construct.  Slaves used to be included.  And we know how the sports team owners feel about “free” agency.  Knowledge is a fraught example these days.  It’s thought provoking how much these constructs can change over time.

I particularly liked his aside that when debating shifts in the consensus about what can be owned, the agents of owners talk about efficiency rather than self interest.  People still regularly suggest that slavery is more efficient than a life of poverty.  I’ll note that talk of innovation has started to displace talk of efficiency.

Public wealth, in the hands of nation states, is close to zero.  Which means we can largely ignore public wealth when thinking about the total wealth of a nation.  He promises to say more about wealth in the hands of non-profits.

Early Advantage

Founder reports successful IPO to his VCs.

Each year in Boston we have a road race, a marathon.  All I knew about marathons before I moved here I learned in grade school.  It’s bad news: you die at the end.  But in compensation you get remembered as a mythic hero.  Now that I live here I’ve picked up some random knowledge.  For example it is important not to sprint out into an early lead.  You gotta pace yourself.  Hearing that again this year I thought: yeah, that’s not the advice we give to startups.

Instead we counsel that you gotta get momentum early.  I recall that Steve Jobs advised the Segway team that they needed a huge PR campaign at launch if they ever hoped to trigger the kind of change they imagined.  They didn’t take his advice, and they didn’t trigger the change.  So, see!

Information cascades are one of the many processes that generate power-law distributions.  Kieran Healy reports on some fun research into this.  The researchers hacked the early days of a random sample of activities at Kickstarter, Epinions, Wikipedia, and Change.org.  They blessed some projects in a small way and then waited to see how much it helped.  It helped.

Kieran is a professional and his commentary is very astute.  Of course, if you’re interested in hacking, helping, defending systems like these then it’s very nice to have some experiments reported publicly.

Apparently this year’s Marathon winner did take an early lead.  There is a touching story about collective action that possibly explains why that worked for him.  You might assume a marathon is the archetype of an activity immune to collective action, but you’d be wrong.

Touch Labor

Reading about how military spending isn’t particularly good for the economy, I found this sentence: “Most weapons projects require relatively little touch labor.”  That’s a nice category: “touch labor.”

Touch labor isn’t a widely used term.  In accounting, I gather, labor is sometimes partitioned into direct and indirect.  Managers and janitors are indirect and assembly line workers are direct.

Is it a bad thing?  “Deploying and managing IT requires a huge amount of touch labor.”  Or maybe it’s a good thing?  “Limiting meetings increases the time spent on direct labor.”

As usual I love lists of categories: “skilled labor,” “guard labor,” “undocumented labor,” …

Reminds me of the current fad for “makers.”

Spider Chart of Unemployment Statistics

This is an impressive bit of charting from the Atlanta Fed.

It shows three samples (March 2012, 2013, and 2014) of thirteen metrics for unemployment.  The designers scaled these various numbers using two reference points: December of 2007 and December of 2009, i.e. just before and just after the great recession started.  The reference years appear on the chart as two circles: the inner circle is the bad year, 2009, and the outer green circle is the good one.

You can see how over the last three years things have been getting better, but not much.  The 13 metrics fall into four rough categories, as indicated by the labels in the corners.  The metrics that tend to suggest the future have improved the most.  The facts on the ground, i.e. utilization, have not improved much.

The scale selected is good, but it’s worth pointing out that 2007 was fairly disappointing.  Consider this next chart.

That shows the U-6 unemployment rate. The recession that followed the bursting of the Internet bubble is the gray band.  So in 2007 the concern was how we seemed unable to achieve utilizations close to those of the 1990s.  Shortly after this chart ends the U-6 rose to 17%+.

Another critique: I suspect the payroll number (and others?) is not adjusted for population growth.

 

Blackmailed to Markup

The web works because it lets you reach other people.  That contact is the motive force behind the entire net.  All the other drivers are complements, competitors, and parasites.

Which brings me to the semantic web.  The semantic web is at best off by one, and at worst it’s entirely wandering off in the wrong direction.  Where web pages are crafted by people to titillate other people – so it’s no wonder they show up – the semantic web exists to empower machines to excite other machines.  It doesn’t make sense, so people don’t show up.  Well, at least not much.

According to the study reported here, 1% of all the web has a few crumbs of semantic markup.  In an industry where we expect ideas to start bonfires, the best we can say for this idea is that it endures.

But on the other hand.

Maybe this is going to change.  And yeah, I and others have said that before.  But the web isn’t the same as it once was.  We are long past the explosive phase-transition growth and deep into the consolidation.  It’s less about people now, and more about the machines (though we call it big-data now).  There’s an entire industry (aka SEO), and has been for a while, that labors to make web pages attractive to the machines rather than people.

So it is with interest that I read that report, which suggests that Google is starting to bless pages bearing semantic mark-up with better placement in search results.  That would be a classic standardization move on Google’s part.  If you have market power you can create incentives for (read: force) suppliers to conform to standards in service of your quality/cost metrics.

I’ve predicted this for a while, and mostly I’ve been wrong – since it keeps not happening.  Maybe I’ll finally be right.

Curiously, I’d not noticed before a perversity in my presumption that the only way the semantic web can succeed is if the big machines force the issue.  Namely: the open world model so beloved by semantic web fans (me included) is totally at odds with this driver.

 

Piketty #4: Escape from Groupthink

“The important point is mainstream economics has difficulty acknowledging work from such sources because to acknowledge is to legitimize. That creates the strange situation in economics whereby something is not thought or known until the right person says it.”

Isn’t that true in any tribe?  It’s not obvious to me how to distinguish, on a day to day basis, when it’s a bad or a good thing.  Though the list drawn from Groupthink isn’t a bad start.

Thomas Palley is suggesting that the economics profession is undergoing a kind of phase change.  That it is coming to grips with the realization that it’s been making many of the mistakes enumerated on that list: illusion of invulnerability, collective efforts to rationalize, absence of questioning, belief in the group’s inherent morality, stereotyped views of enemy leaders, direct pressure on any member …, self-censorship, illusion of unanimity, and self-appointed mindguards.

I’m surprised that I’m not aware of any literature, say a cookbook, on how groups escape from groupthink.  It’s almost the definition of a group that it exists to maintain a focus; so the best it can do is drift toward a different focus.  Of course the MBA solution to this problem is leaders, layoffs, reorganizations, and manipulation of incentives – all of which are crude.  And the high-tech version is particularly brutal – we let the old firms wither and create new firms from scratch.

I have observed situations where a group slowly loses its grip on the consensus delusion.  It keeps going through the motions.  The self censorship and mind guards continue to do their work, but it becomes more and more half hearted.  In that context, when the layoffs come the level of outrage is tempered.  The group members are then envious of those who jumped ship before the boat’s leaks became so apparent.

Emacs’ simple-httpd and impatient-mode

I found another JavaScript CLI scheme, Skewer, which I’ll get around to trying sooner or later.  But I got distracted by some adjacent work.  Skewer’s scheme for connecting Emacs to the JavaScript interpreter, so it can do remote evaluation etc., is to infect the interpreter with a bit of code that then connects into Emacs using HTTP long polling.  Which is good because it works with all the browsers, and is bad because … well actually it’s pretty good.  Obviously, that requires that we have a working HTTP server inside of Emacs.

Unsurprisingly people have written HTTP servers in emacs-lisp; one of these is simple-httpd.  It is easily installed from the MELPA package repository.

So, what might you do with such a thing?  I already mentioned Skewer, and I see that somebody wrote a web UI for his Emacs RSS reader.  Somebody else wrote an AirPlay server.  Those examples are suggestive of what other Emacs apps (calc, gnus, magit, erc, etc. etc.) might do.  Why?  Because they can?  I don’t know.
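For a taste, here is a minimal servlet sketch using simple-httpd’s defservlet macro (the servlet name and its body are just my illustration):

```elisp
(require 'simple-httpd)

;; Start the server; by default it listens on port 8080.
(httpd-start)

;; A toy servlet, served at http://localhost:8080/hello
;; The name and the text are purely illustrative.
(defservlet hello text/plain ()
  (insert "hello from emacs"))
```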

My favorite is impatient-mode, which spontaneously updates a web page as you edit your buffer of, typically, HTML.  There’s a video:

You can add filters to the refresh pipeline. So if you want your org-mode files displayed you might refine the filtering scheme. I did this, but I doubt this is the best approach.


(require 'cl-lib)  ; for cl-case

(defun imp-htmlize-filter (buffer)
  "Alternate htmlization of BUFFER before sending to clients."
  ;; Leave the result in the current buffer.
  (let ((m (with-current-buffer buffer major-mode)))
    (cl-case m
      (org-mode
       ;; Org buffers: use org's own HTML exporter.
       (let ((output (current-buffer)))
         (with-current-buffer buffer
           (org-export-as-html 100 nil output))))
      (t
       ;; Everything else: htmlize the buffer as-is.
       (let ((html-buffer (save-match-data (htmlize-buffer buffer))))
         (insert-buffer-substring html-buffer)
         (kill-buffer html-buffer))))))

Other things (graphviz?) would be fun too.
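For reference, getting impatient-mode going takes only a couple of forms (a sketch; as I recall it serves buffers under /imp/ on simple-httpd’s default port):

```elisp
(require 'simple-httpd)
(require 'impatient-mode)

;; simple-httpd listens on port 8080 by default.
(httpd-start)

;; Then, in the buffer you want mirrored: M-x impatient-mode
;; and point a browser at http://localhost:8080/imp/
```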

emacs, node, javascript, oh-my

Each time I turn my attention to using JavaScript I’m a bit taken aback by how tangled the Emacs tooling is. So here are some random points I discovered along the way.

Of course there is a very nice debugger built into Chrome, and that does a lot to undermine the incentives to build something else in Emacs. I only recently discovered there is a more powerful version of that debugger.

Safari and Chrome, because they have WebKit in common, can be asked on start-up to provide “Remote WebKit Debug” connections.  You invoke ’em with a switch, and they then listen (e.g. open -a 'Google Chrome' --args --remote-debugging-port=9222).  Bear in mind that this debug protocol is quite invasive, i.e. it’s a security risk.  Having done that it’s fun, educational, and trivial to look at the inside of your browser session: just point a browser at the port you opened.  Tools that use this protocol use the JSON variant rooted there.

One thing that makes the Emacs tools for working on JavaScript such a mess is that there are far too many ways to talk to the JavaScript instances, and then there are multiple attempts to use each of those.  So there are two schemes that try to use the remote WebKit debug pathway: Kite and jss (also known as jsSlime).  I’ve played with both, and have them installed, but I don’t use them much.  Both are useful in their own ways; I developed a slight preference for jss, which has a pretty nice way to inspect objects.  Though I’m on the lookout for a good Emacs based JavaScript object inspector.

There is a delightful video from Emacs Rocks explaining yet another scheme for interacting with JavaScript from Emacs, using swank-js.  What’s shown in that video is wondrous and a bit mysterious.  The mysterious bit is that it doesn’t actually make clear what the plumbing looks like.  I’ll explain.

Slime is a venerable Emacs extension originally developed to interact with Common Lisp processes.  It does that via a protocol called swank.  Which means that, unlike for example Emacs shell mode, there is a real protocol.  The sweet thing about slime/swank is that it provides a wide window into the process, enabling all kinds of desirable things, at least for Common Lisp: inspecting objects, redefining single functions, debugging, thread management, etc. etc.  In the video you can see he’s managed to get a swank-like connection into a browser tab, and this lets him define functions and dynamically tinker with the tab.

The plumbing is wonderfully messy.  A node.js process acts as an intermediary, bridging between swank (for the benefit of Emacs) and a web-socket based debugging protocol that hooks into the browser.  I assume that web socket protocol is similar, if not identical, to the remote WebKit debug protocol.  In the video the Emacs command M-x slime-jack-into-browser establishes the pipeline, and reading that code is enlightening.

A consequence of that design is that the resulting Emacs slime buffer is actually interacting with two processes: a node.js process and the JavaScript in the browser tab.  You can switch between these.  I find that goes wrong sometimes, and it took me a while to discover the slime command (slime commands start with a comma) “,sticky-select-remote”.  If you hit tab it will list the things you might talk to.

The swank-js github instructions are pretty good.  And they explain how to use swank-js with node.js – though that assumes you’re reasonably comfortable with node.js already.  I don’t actually follow those instructions.  Instead, after including swank-js in my project’s dependencies, as the instructions suggest, I require('swank-js') in my main module (only when in a development mode of course).  It’s worth noting that when you then slime-connect to your node.js program you’ll be in the global object.  Your actual program (usually found in the file server.js) will have been wrapped up in a function and hence its local variables are invisible to you.  I work around that by putting interesting state into an object and then doing something like global.interesting = interesting.
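In server.js that workflow might look roughly like this (a sketch; the SWANK environment-variable guard and the shape of the state object are my own illustration):

```javascript
// Load swank-js only when asked for, e.g.: SWANK=1 node server.js
// (You wouldn't want the swank port open in production.)
if (process.env.SWANK) {
  require('swank-js'); // starts the listener Emacs slime-connects to
}

// node wraps server.js in a function, so its local variables are
// invisible from the swank REPL; hang interesting state off `global`.
var interesting = { requestCount: 0 };
global.interesting = interesting;
```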

Recall the remote WebKit debug protocol?  There is a clone of that for node.js known as node-inspector.  Use it!

I have yet to try two other Emacs/JavaScript interaction packages.  slime-proxy and skewer.

If you don’t use slime already you might be able to install these using the usual Emacs package repositories.  I ran into problems with that because I use a very fresh slime/swank and the Emacs package system wanted to bless me with older variants.

Hope this helps.