Category Archives: programming

Plot-Window

I've been playing with Parenscript and Websockets.  I've made a small useful thing.  Plot-window can be used to plot data from your Common Lisp REPL.  It displays the plot in a web page; you leave this web page up as you work, and the plot function revises it on demand.

Here's a little screencast.  It shows how to clone it from github, load it up, start a little embedded webserver, open the display page in the browser, and then finally make a few plots from the REPL.
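For flavor, here is roughly what that REPL session looks like.  The function names below are my illustrative guesses rather than plot-window's actual exports:

  ;; Hypothetical session; these names are illustrative guesses,
  ;; not necessarily plot-window's real API.
  (ql:quickload "plot-window")   ; assumes the github clone is visible to Quicklisp
  (plot-window:start-server)     ; start the little embedded webserver
  ;; ... open the display page in your browser and leave it up ...
  (plot-window:plot '((0 0) (1 1) (2 4) (3 9)))  ; redraws the page with a new chart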

The actual charts are rendered by Flot, one of many Javascript charting libraries.  So you can actually make many, many different kinds of charts.  (FYI: Liam Healy has a posting about a more traditional approach to Lisp charting.)

Finally, this short video is a preview of how this might be extended to use a web browser as a generalized display for your Lisp process; in this case a Parenscript form evaluated in Emacs builds and animates a page (using D3JS and SVG).

Old Habits

I’m enjoying Planet Lisp’s feed of new Lisp projects at github: http://planet.lisp.org/github.atom.

A long, long time ago I fell into a coding convention.  Most of my little projects have a file where I define the package (or packages), and this file is loaded first.  Most projects I see at github follow this style, but a few don't.  Further, I never ever switch packages inside a given source file.

I think it’s time to set aside these habits.

I'm reasonably confident that both these habits arose because of Emacs limitations.  Back in the day it wasn't particularly clever about handling the package.  I don't think I ever worked in a version so limited that it required the package to be asserted in the mode line, but I certainly have worked on code bases where every file asserted the package twice at the top: once in the mode line and once via in-package.

I certainly worked in variants of Emacs that had firm limits on the in-package form, i.e. that it appear early in the file, and that you not switch packages.

The only reason this is worth stating out loud is because I see a lot of little projects that consist of three files: my-project.asd, package.lisp, and my-project.lisp.  That pattern is, I think, obsolete.  There really isn't a good reason anymore for the package.lisp file in simple little things.

  (in-package #:cl-user)
  (defpackage #:my-little-package
    (:use #:common-lisp))
  (in-package #:my-little-package)

  (defun my-awesome-hack ()
    ; ...
  )

I'm still waffling about the value of including a mode line at this point; as shown above, I'm leaning toward eliminating it too.
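For reference, the mode line in question is the file-variables comment Emacs reads from a file's first line, in the classic style:

  ;;; -*- Mode: Lisp; Package: my-little-package -*-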

WDYT?

What’s new in Quicklisp

A new Quicklisp release is out.  “New projects: cl-arff-parser, cl-bayesnet, cl-libpuzzle, cl-one-time-passwords, cl-rrt, cl-secure-read, function-cache, gendl, sha3, trivial-raw-io, yaclanapht. … Updated projects…”

I spent a few minutes looking at the new ones, so here’s a bit more info…

cl-arff-parser – https://github.com/pieterw/cl-arff-parser#readme (BSD?)
A reader for ARFF (Attribute-Relation File Format) files: ASCII files that describe a list of instances sharing a set of attributes.

cl-bayesnet – https://github.com/lhope/cl-bayesnet#readme (LLGPL)
A tool for the compilation and probability calculation of discrete, probabilistic Bayesian Networks.

cl-libpuzzle – https://github.com/pocket7878/cl-libpuzzle#readme (LLGPL)
A foreign function bridge to libpuzzle, a library for finding similar pictures.
See also: http://linux.die.net/man/3/libpuzzle

cl-one-time-passwords – https://github.com/bhyde/cl-one-time-passwords#readme (Apache 2)
An implementation of the HOTP and TOTP standards, as used in Google Authenticator and others for two-factor authentication.

cl-rrt – https://github.com/guicho271828/cl-rrt#readme (LLGPL)
a … multidimensional path-planning algorithm … use[d] in robotics … car driving …

cl-secure-read – https://github.com/mabragor/cl-secure-read#readme (GPLv3)
Based on the "Let Over Lambda" secure reader.

function-cache – https://github.com/AccelerationNet/function-cache#readme (BSD)
An expanded form of memoization.

gendl – https://github.com/genworks/gendl#readme (AGPL)
A big Generative Programming and Knowledge Based Engineering framework, previously known as genworks-gdl.

sha3 – https://github.com/pmai/sha3#readme (MIT/X11)
An implementation of the Secure Hash Algorithm 3 (SHA-3), also known as Keccak.

trivial-raw-io – https://github.com/redline6561/trivial-raw-io#readme (BSD)
… export three simple symbols: with-raw-io, read-char, and read-line

yaclanapht – https://github.com/mabragor/anaphora#readme (GPL3)
An improvement/fork of Nikodemus Siivola's ANAPHORA, with a license change.

My Common Lisp Buildpack for Heroku

Meanwhile, I reworked an existing buildpack for my needs.  It's very easy to try.  Assuming you've signed up for Heroku following their quickstart instructions, you need only run:

curl https://gist.github.com/bhyde/5383182/raw/gistfile1.txt | bash

That will create a directory on your machine with the example application's sources; over on Heroku it will build and launch that application; and finally it will open your web browser on the application's home page.


This is all free.

This application is written in Common Lisp.  There are lots of nice open source Common Lisp compilers; in this case it's using Clozure Common Lisp.  The sources of the app amount to 16 lines of code.  Another few lines implement the hook used by the buildpack to compile the application.  Tiny applications like this are made possible thru the excellent build and library support in the modern Lisp community, so most of the meat is defining how to use them (i.e. the system definition).
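For a sense of scale, a comparably tiny app might look like the sketch below.  This is not the actual source of the example application, just a plausible Hunchentoot equivalent:

  (ql:quickload "hunchentoot")

  (hunchentoot:define-easy-handler (home :uri "/") ()
    (setf (hunchentoot:content-type*) "text/html")
    "<h1>Hello from Lisp on Heroku</h1>")

  ;; Heroku hands the app its port via the PORT environment variable.
  (defun start ()
    (hunchentoot:start
     (make-instance 'hunchentoot:easy-acceptor
                    :port (parse-integer (or (uiop:getenv "PORT") "8080")))))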

It's interesting that the curl-and-bash above installs no Common Lisp on your own machine, nor does it check out the buildpack I built.  In total it adds ~200K to your local machine, most of which is the git repository.  The actual sources are about 14K, of which an image accounts for 13K.

To undo the above you need only delete the directory it creates and destroy the application on Heroku: heroku apps:destroy <name>.

Heroku Buildpacks

[Chart: GitHub stars vs. forks, one dot per Heroku buildpack, log-log]

Heroku is a cloud computing platform, i.e. a place where you can run your applications.  When you author an application on Heroku it is split into two parts: one part, the buildpack, is responsible for building your application, while the other part is the actual application's sources.
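Concretely, a buildpack is little more than a repository holding three executables: bin/detect, bin/compile, and bin/release.  Here's a minimal sketch of bin/detect, with a hypothetical marker file standing in for real detection logic:

#!/bin/sh
# bin/detect: a buildpack claims an app by exiting 0.
# bin/compile and bin/release are similar executables; compile is
# handed the build and cache directories and does the real work.
BUILD_DIR="$1"
if [ -f "$BUILD_DIR/my-app.asd" ]; then  # hypothetical marker file
  echo "Common Lisp"
  exit 0
fi
exit 1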

I found this interesting.  Modularity always serves to separate concerns.  What concerns, and whose concerns, are natural next questions.  Buildpacks make it easier to get started with Heroku; so they address one of Heroku's concerns, i.e. "How can we get people to use our product?", by lowering the barrier to entry.

There are many buildpacks.  Heroku provides a half dozen, and other people have built a few hundred more.  Each buildpack makes it easier to get an app based on some programming framework started.  For example there are ones for: Ruby, node.js, Erlang, WordPress, Common Lisp, Go, etc. etc. etc.

Of course how exactly any given application gets built tends to quickly become unique.  I suspect that most serious app developers customize their buildpacks.  Heroku makes extensive use of git to move things around, so naturally a lot of the buildpacks are on github.  I got to thinking it would be interesting to look at some statistics about them.

The chart above has one dot for each of N buildpacks (in some cases multiple buildpacks have landed on the same dot).  Notice that the chart is log-log.  The vertical axis indicates how many stars the buildpack has received; that's a proxy for how popular it is.  The horizontal axis shows how often the buildpack has been forked.

In one's fantasy the perfect buildpack for a given programming platform would fulfill all the needs of its app developers.  In that case it would never need to be forked.  But three things work against that fantasy.  First off, the buildpacks aren't perfect; in fact they tend to be simplistic, because their short-term goal is to make it trivial to get started.  Secondly, I think, the building of applications tends to sport all kinds of unique special cases.  And finally the usual third reason: it's hard work to push patches around, so even if you improve a given buildpack the chances your enhancements flow back to the master buildpack are, well, iffy.

Anyhow, I wanted to share the picture. (oh, and zero was mapped to .1)

Compass and Straight Edge

Programming languages often have a juicy core of one kind or another.  Back in the 60s and 70s we had a lovely assortment of languages, each of which took some particular idea to heart and then ran as far as they could with that idea.  SETL, built on sets, is a good example.  It was a lot of fun to write within its framework.  I still recall my delight when, at one point, they managed to get the compiler's optimizer to the point where it spontaneously discovered assorted famous graph algorithms.

Other examples include:

  • SIMULA – using what we’d now call lightweight threads.  
  • SNOBOL – centered around pattern matching, which informed a whole tangle of other languages like SL5, Prolog, and such.
  • LISP – with its symbols, lists, etc. etc.
  • APL – with its arrays
  • etc. etc.

There are others that stand atop a big data structure: SQL, Emacs, and AutoCAD.

Some stand on an unusual computational model.  Rule-based (truth maintenance?) systems like Unix make or Prolog.  The constraint-based systems.  Lazy evaluation.

A very few are almost only about some syntactic or semantic gimmick, like Forth, PostScript, or Python.

Is that era largely over?  Has the search space been mined out?  I guess some work on genetic programming or machine learning is the modern descendant of this style of language design.

All these languages have a kind of inward looking quality to them.  They don't really care much about their users.  If there are applications, well, that's nice.  Their enthusiasm is rooted in the juicy (often somewhat eccentric) center, not the tedium of actually putting them to use.  To a greater or lesser degree you can make that critique of all programming languages.

Which brings me to last night.  Harry Mairson gave a nice little talk to the Boston Lisp meeting about a spin-off of his hobby, which is making string instruments.

Apparently we don’t actually have a good handle on how our ancestors designed and built their instruments.  Insta-theories might include that they traced existing instruments, or maybe they had templates they handed down, etc. etc.

One recent theory is that they had recipes that guided the making of patterns using only compass and straight edge.  There is a book that makes this case, François Denis's "Traité de Lutherie."

When Harry found and read this book he got to wondering if the descriptions in the book might be converted into something more algorithmic.  He spun off a little language where the juicy core was a compass and straight edge.  … time passes … and now he can write programs that almost sketch out the designs for cellos and such.   It was an awesome, eccentric, fun talk.
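His language isn't published yet (see the update below), so what follows is entirely my own toy sketch of what compass-and-straightedge primitives might look like in Lisp, not Mairson's language:

  ;; Toy compass-and-straightedge kernel: points, distances, and the
  ;; compass operation of intersecting two circles.
  (defstruct (pt (:constructor pt (x y))) x y)

  (defun dist (p q)
    (sqrt (+ (expt (- (pt-x p) (pt-x q)) 2)
             (expt (- (pt-y p) (pt-y q)) 2))))

  ;; Intersect the circle (center C1, radius R1) with (center C2, radius R2).
  (defun circle-circle (c1 r1 c2 r2)
    (let* ((d (dist c1 c2))
           (a (/ (+ (- (* r1 r1) (* r2 r2)) (* d d)) (* 2 d)))
           (h (sqrt (- (* r1 r1) (* a a))))
           (ux (/ (- (pt-x c2) (pt-x c1)) d))   ; unit vector from c1 to c2
           (uy (/ (- (pt-y c2) (pt-y c1)) d))
           (mx (+ (pt-x c1) (* a ux)))          ; foot of the common chord
           (my (+ (pt-y c1) (* a uy))))
      (list (pt (- mx (* h uy)) (+ my (* h ux)))
            (pt (+ mx (* h uy)) (- my (* h ux))))))

  ;; The perpendicular bisector of PQ passes through the two points
  ;; where equal circles about P and Q cross: a classic construction.
  (defun perpendicular-bisector (p q)
    (circle-circle p (dist p q) q (dist p q)))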

It’s notable that he did this backward.  He started from the application and ended up with a cute new language based on a curious juicy center.

I found myself wondering to what extent the design languages used by craftsmen in the Middle Ages rested on the compass and straightedge.  Architecture?  Furniture?  Music?  Here's a graphic I found showing a bit of (presumably modern) font design.

[Image: Nayera Abusteit's Arabic typeface design, 2013]

His work is not yet published, so all of you who are suddenly tempted to write a web server using only a compass and straight edge are best advised to wait until it is.

Update: Cool, there is now a paper you can read.  I look forward to seeing your web servers.

When Flags are Necessary

Back in January Jeff Hodges wrote a wonderful essay: "Notes on Distributed Systems for Young Bloods."  There is a lot of good stuff in there.  I was remembering it recently because one section highlights something I'd not thought about so clearly before, i.e. that feature flags run contrary to the usual software engineering best practice, as explained in the next-to-last paragraph of this snippet:

“Feature flags are how infrastructure is rolled out. “Feature flags” are a common way product engineers roll out new features in a system. Feature flags are typically associated with frontend A/B testing where they are used to show a new design or feature to only some of the userbase. But they are a powerful way of replacing infrastructure as well.

Suppose you’re going from a single database to a service that hides the details of a new storage solution. Have the service wrap around the legacy storage, and ramp up writes to it slowly. With backfilling, comparison checks on read (another feature flag), and then slow ramp up of reads (yet another flag), you will have much more confidence and fewer disasters. Too many projects have failed because they went for the “big cutover” or a series of “big cutovers” that were then forced into rollbacks by bugs found too late.

Feature flags sound like a terrible mess of conditionals to a classically trained object-oriented developer or a new engineer with well-intentioned training. And the use of feature flags means accepting that having multiple versions of infrastructure and data is a norm, not a rarity. This is a deep lesson. What works well for single-machine systems sometimes falters in the face of distributed problems.

Feature flags are best understood as a trade-off, trading local complexity (in the code, in one system) for global simplicity and resilience.”
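To make the flagged storage migration concrete, here is a minimal sketch of my own, with toy in-memory stores standing in for the real systems, of the flag-per-step read path the quote describes:

  ;; Toy sketch of flag-gated reads during a storage migration.
  (defvar *legacy-store* (make-hash-table :test #'equal))
  (defvar *new-store*    (make-hash-table :test #'equal))

  (defvar *compare-on-read* nil)  ; feature flag: double-read and compare
  (defvar *read-from-new*   nil)  ; yet another flag: the final ramp of reads

  (defun fetch (key)
    (let ((old (gethash key *legacy-store*)))
      (when *compare-on-read*
        (let ((new (gethash key *new-store*)))
          (unless (equalp old new)
            (warn "store mismatch for ~S: ~S vs ~S" key old new))))
      (if *read-from-new*
          (gethash key *new-store*)
          old)))

Each flag can be ramped up for a fraction of traffic, and rolled back, independently of the others.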

I think that trade-off helps to explain why the things that get built out over time so often seem to be in such desperate need of a total rewrite.  The litter left behind by the search (A/B testing) and the vestigial organs of the switch-overs all frustrate some people's sense of good design.  The design patterns at one scale or dimension are at odds with those of another.

Coder’s Asceticism

In poetry class I learned that performing inside a straitjacket can, surprisingly, work out pretty well.  Drawing with a peculiar pen, working in an unusual medium or venue … all these can work out surprisingly well.  You can go too far though.  Knowing that an abundance of choice does not make us happy is not an argument for eliminating all choice.

"Moderation in all things" is how Patrick Stein sums up a post on trying on a few programming straitjackets.  No function over five lines.  No package (we are talking Common Lisp here) spread over multiple files.  Unusual coding conventions for your package namespaces.  He is having fun.

Seems to me that a rule on the maximum size of a function is silly.  Surely the question is what the right distribution of function sizes is; I doubt it's normally distributed.  I'd think a good rule might be that a function's size should signal something about its complexity.  Isn't function complexity mostly independent of program modularity?  I.e., modularity alone can cause functions to fragment.  It is a common fetish: "small is beautiful."  And that helps to explain why I've never heard people advocating against small functions.  I've often encountered code that seems scattered into a thousand tiny pieces; such code becomes incomprehensible.  That is certainly the wrong thing to do, except when it isn't.

Originally Written?

I wonder why Apple included the word “originally” in “Applications must be originally written in Objective-C, C, C++, or JavaScript”?

If you are building a platform of unhackable devices then you need to control the gateways to hacking, i.e. the tool chain and the application distribution channel.  So the clause above's purpose is to do just that; e.g. if you want to hack code to run here you gotta pass thru our interpreters, our compilers.  But they wrote something stronger.  What they wrote is that you have to write in language A, B, C, or D.  Only.  You're not allowed to write in language X, Y, or Z, even if you cross-compile X, Y, or Z into A, B, C, or D before you deliver.  I don't get it.

For clarity: this means you may not write a program in, say, Python, and then run that program's source text thru a Python-to-C translator before compiling it and submitting it for Apple's approval.

One of the things that led me to Lisp was noticing how most of the programs I wrote weren't in the language the compilers offered me; instead we would design a custom language better suited to the task at hand.  For example: the rule system that drives a compiler's optimizations, the glue that gets some delicate thing to work on multiple systems, or the custom notations that generate the object system.  Once I accepted how central that is to software architecture, I wanted a language that respected and encouraged it.
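A toy illustration of what I mean, since in Lisp the custom notation is just a macro away.  This is my own example, with the engine that applies the rules elided:

  ;; A toy rule notation for compiler-style rewrites.
  (defvar *optimization-rules* '())

  (defmacro defrule (name pattern replacement)
    "Register a rewrite rule under NAME; the matcher that applies it is elided."
    `(push (list ',name ',pattern ',replacement) *optimization-rules*))

  ;; The notation in use; ?x is a pattern variable by convention.
  (defrule add-zero (+ ?x 0) ?x)
  (defrule double   (+ ?x ?x) (* 2 ?x))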

I'm not terribly surprised that Apple has decided to run the audacious experiment of creating a platform that strives for devices that are no longer computers once in the hands of the end user; devices which the end users cannot hack upon.  I think it's kind of vile, and it worries me that they may succeed, but it doesn't surprise me.  I should write another post about how potent such a platform will be if they can sustain it.

They aren't the first to go down this path.  It's what the locked phones do.  It is what some of the game consoles do.  And I guess if you fully embrace my presumption that the right metaphor for modern websites (aka applications) is that they are massively multiplayer games, then it's inevitable that the game industry's habits would move into the rest of the application stack.

By way of amusement, this clause would appear not to prevent you from doing a less interesting but still common architectural trick: i.e. where you write in a dialect of language A, call it A', and then cross-compile from that into A before running thru their tool chain.  So if you can cast your need for a domain-specific language into a dialect then you can sort of wiggle around the restriction.  To my surprise there is an important example of this: Caja.  Caja is a dialect of Javascript with some extremely desirable security features.  It's used by Yahoo and Orkut (for example) to assure that third-party widgets are tightly constrained in the damage or snooping they can do to other portions of the web page.

And I guess there is always C++; you can do some very wonky things at compile time in C++.

n2n

n2n is a nice peer to peer vpn. Here are some hints, mostly so I’ll remember them.

There is a minor bit-o-confusion on the Macintosh. The edge nodes all use tun devices, rather than real ethernet devices, to plug in. You'll need to install tun devices by hand. Even then, these devices will not show up in the various System Preferences. Don't worry about that.

The n2n processes (both edge and supernode) will report status via UDP if you poke a UDP packet at 127.0.0.1:5645 (aka localhost:5645), like so:


$ echo "" | ncat --idle-timeout 1s --udp localhost 5645
----------------
uptime    1212
edges     2
errors    0
reg_sup   21
reg_nak   0
fwd       0
broadcast 76
last fwd  25 sec ago
last reg  5 sec ago
Ncat: Idle timeout expired (1000 ms).
$

You will probably need to install ncat, which is part of nmap.

Each edge node in an n2n community's pseudo ethernet needs a MAC address. Analogous to private IP addresses, there are private MAC addresses. This mess will gin up a stable MAC address for your edge node based on the first MAC address found on your machine.


N2N_FAKE_MAC=`ifconfig -a | awk '/ether/{print $2}' | head -1 | sed 's/^..:..:../10:00:00/'`

If you want an edge node to route all traffic thru your community's VPN and then out to the rest of the network you need to do two things. Some edge node needs to volunteer to act as a gateway, and each client that wants to use that gateway needs to configure its routing appropriately.

First, gateways typically run natd. Happily, on the Mac you need only enable Internet Sharing in the Sharing preferences to get that going.

Secondly, edge nodes that want to route over the VPN and out that gateway to the rest of the internet will then need to mess with their routing tables. That’s risky; mess up your routing table and you lose connectivity. You can find out what the default route for packets is by asking:


route -n get default

Note the result down since you’ll need it to switch back.

You can change the default route by doing (presume for a moment that your gateway node is running at 192.168.13.1):


route change default 192.168.13.1

But wait; that will break your n2n VPN, because your traffic to your peers will try to flow thru the new default. So you need to add specific routes to the supernode and other edge nodes first. I don't know how to get the list of edges, so I set them up by hand, along the lines sketched below.
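With made-up addresses for illustration (supernode at 203.0.113.5, and 192.168.1.1 as the old default gateway you noted down earlier):

route add 203.0.113.5 192.168.1.1    # pin the supernode to the old route
# ... repeat for each edge node's public IP ...
route change default 192.168.13.1    # now it is safe to flip the default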

You switch back by resetting your default route and tearing down the one-off routes to the other n2n nodes. Of course, if all else fails, reboot. You're on your own.

You can see the entire routing table by doing: “netstat -nr”.