Category Archives: programming

Docker, part 3

Docker containers don’t support multicast, at least not easily.   I find that a bummer.

It’s unclear why not.  Well, the most immediate reason is that the networking interfaces Docker creates for the containers don’t have the flag that enables multicast.  That, at least according to the issue, is because that’s how Linux defaults these interfaces.  Why did they do that?

This means that any number of P2P or masterless solutions don’t work.  For example zeroconf/mDNS is out.  I guess this explains the handful of custom service discovery tools.  Reinventing the wheel.

In other news… Once you have Boot2Docker set up you need to tell the docker command where the docker daemon is listening for instructions.  You do that with the -H switch to docker, or via the DOCKER_HOST environment variable.  Typically you’d do:

export DOCKER_HOST=tcp://192.168.59.103:2375

But if you’re feeling fastidious you might want to ask boot2docker for the IP and port.

export "DOCKER_HOST=tcp://$(boot2docker ip 2> /dev/null):$(boot2docker info | sed 's/^.*DockerPort.:\([0-9]*\).*$/\1/')"
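That sed is just scraping the DockerPort number out of boot2docker’s info dump.  A minimal sketch of what it’s doing, run against a fabricated sample of that output:

```shell
# Pull the DockerPort number out of a sample of `boot2docker info` output.
# (The JSON sample here is made up for illustration.)
info='{"Name":"boot2docker-vm","DockerPort":2375,"SSHPort":2022}'
port=$(echo "$info" | sed 's/^.*DockerPort.:\([0-9]*\).*$/\1/')
echo "$port"
```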

establish-routing-to-boot2docker-container-network

Boot2Docker lets you run Docker Containers on your Mac by using VirtualBox to create a stripped down Linux Box (call that DH) where the Docker daemon can run.   DH and your Mac have a networking interface on a software defined network (named vboxnet) created by Virtual Box.  The containers and DH have networking interfaces on a software defined network created by the Docker daemon.  Call it SDN-D, since they didn’t name it.

The authors of boot2docker did not set things up so your Mac can connect directly to the containers on SDN-D.  Presumably they didn’t think it wise to adjust the Mac’s routing tables.  But you can, and it is very convenient.  It lets you avoid most of the elegant, but tedious, --publish or --publish-all (aka -p, -P) switches when running a container.  Those hand-craft special plumbing for individual ports.  It’s also nice because DH is very stripped down, making it painful to work on.

So, I give you this little shell script: establish-routing-to-boot2docker-container-network.  It adds routing on the Mac to SDN-D via DH on vboxnet.  This is risky if SDN-D happens to overlap a network the Mac is already routing to, and the script does not guard against that.  See below for how to deal if you have that problem.
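In case that link rots, what the script does boils down to one route command.  A sketch that just prints the command rather than running it (route needs sudo; the addresses are the usual defaults, not necessarily yours):

```shell
# 172.17.0.0/16 is Docker's default container network (SDN-D);
# 192.168.59.103 is the usual boot2docker address, used as a fallback
# if boot2docker isn't installed.
DH_IP=$(boot2docker ip 2>/dev/null || echo 192.168.59.103)
echo sudo route -n add -net 172.17.0.0/16 "$DH_IP"
```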

If your containers have ssh listeners then you can put this in your ~/.ssh/config to avoid the PIA around host keys.  But notice how it hardwires the numbers for SDN-D.

Host 172.17.0.*
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  User root

The numbers of SDN-D are bound when the Docker daemon launches on DH.  The --bip switch, passed when the docker daemon launches, can adjust that.  You set it in /var/lib/boot2docker/profile on DH via EXTRA_ARGS.  Do that if you have the overlap problem mentioned above.  I do it because I want SDN-D to be small; that lets nmap scan it quickly.
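For example, something like this in the profile (a sketch; the particular /24 is my choice, picked to keep SDN-D small):

```shell
# /var/lib/boot2docker/profile on DH
EXTRA_ARGS="--bip=172.17.42.1/24"
```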

If you’ve not used ~/.ssh/config before, well, you should!  But in that case you may find it useful to know that ssh uses the first setting it finds, so that Host block should appear before your global defaults.

 

Trust

Things like this make me nervous.

A quick install via a bash script pulled from a URL:

curl https://mr-trusty.org/install-fun-thing.sh | sudo bash

New to me: this quick trick to install something into your /usr/local/bin with docker’s help.

docker run -v /usr/local/bin:/target my-trusty/fun-thing

And then we have this handy way to install via emacs…

(url-retrieve
 "https://raw.github.com/mr-trusty/fun-thing/master/fun-thing-install.el"
 (lambda (s)
   (end-of-buffer)
   (eval-print-last-sexp)))

It’s all about short term benefits and hardly about the risks.

Docker, part 2


I played with Docker some more.  It’s still in beta so, unsurprisingly, I ran into some problems.  It’s cool, nonetheless.

I made a repository for running OpenMCL, aka ccl, inside a container.  I set this up so the Lisp process expects to be managed using slime/swank.  So it exposes the port where swank listens for clients to connect.  When you run it you publish that port, i.e. “-p 1234:4005” in the example below.

Docker shines at making it easy to try things like this.  Fire it up: “docker run --name=my_ccl -i -d -p 1234:4005 bhyde/crate-of-ccl”.  Docker will spontaneously fetch everything you need.  Then you M-x slime-connect to :1234 and you are all set.  Well, almost; the hard part is getting access to that exported port.

I have run this in two ways, on my Mac, and on DigitalOcean.  On the Mac you need to have a virtual machine running Linux that will hold your containers – the usual way to do that is the boot2docker package.  On Digital Ocean you can either run a Linux droplet and then install Docker, or you can use the Docker application, which bundles that for you.

I ran into lots of challenges getting access to the exported port.  In the end I settled on using good old ssh LocalForward statements in my ~/.ssh/config to bring the exported port back to my workstation.  Something like “LocalForward 91234 172.17.42.1:1234” where that IP address is that of an interface (docker0, for example) on the machine where the container is running.  Lots of other things look like they will work, but didn’t.
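The stanza in ~/.ssh/config looks roughly like this (the host alias and droplet address here are made up; the forward is the one from the post):

```
Host my-droplet
  HostName 203.0.113.10
  LocalForward 91234 172.17.42.1:1234
```

With that, an ssh session to the droplet makes localhost:91234 on the workstation reach the container’s swank port.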

Docker consists of a client and a server (i.e. daemon).  Both are implemented in the same executable.  The client chats with the server using HTTP (approximately).  This usually happens over a Unix socket.  But you can ask the daemon to listen on a TCP port, and if you LocalForward that back to your workstation you can manage everything from there.  This is nice since you can avoid cluttering your container-hosting machine with source files.  I have bash functions like this one, “dfc () { docker -H tcp://localhost:2376 $@ ; }”, which provides a shorthand for chatting with the docker daemon on my Digital Ocean machine.
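Spelled out as a proper block (the port is the one from the post, reached via the LocalForward described above):

```shell
# A docker client wrapper that targets the remote daemon's
# forwarded TCP port instead of the local Unix socket.
dfc () { docker -H tcp://localhost:2376 "$@" ; }
# usage:  dfc ps      # list containers on the remote machine
```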

OpenMCL/ccl doesn’t really like to be run as a server.  People work around that by running it under something like screen (or tmux, detachtty, etc.).  Docker bundles this functionality; that’s what the -i switch (for interactive) requests in that docker run command.  Having done that you can then use “docker logs my_ccl” or “docker attach my_ccl” to dump the output or open a connection to the Lisp process’ REPL.  You exit a docker attach session using control-C.  That can be difficult if you are inside an Emacs comint session, in which case M-x comint-kill-subjob is sometimes helpful.

For reasons beyond my ken, doing “echo '(print :hi)' | docker attach my_ccl” gets slightly different results on Digital Ocean vs. boot2docker.  Still, you can use that to do assorted simple things.  UIOP is included in the image along with Quicklisp, so you can do uiop:run-program calls … for example to apt-get etc.

Of course if you really want to do apt-get, install a bundle of Lisp code, etc. you ought to create a new container built on this one.  That kind of layering is another place where Docker shines.

So far I haven’t puzzled out how to run one-liners.  Something like “docker run --rm bhyde/crate-of-ccl ccl -e '(print :hi)'” doesn’t work out as I’d expect.  It appears that argument pass-through, argument quoting, and the plumbing of standard IO et al. are full of personality which I haven’t comprehended.  Or maybe there are bugs.

That’s frustrating – it undermines my desire to do sterile testing.

 

Docker is interesting

Somebody mentioned Docker during a recent phone interview, so I went off to have a look.  It’s interesting.

We all love sandboxing.  Sandboxing is the idea that you could run your computations inside of a box.  The box would then protect us from whatever vile thing the computation might do.  Vice versa, it might protect the computation from whatever attacks the outside world might inflict upon it.  There are many ways to build a sandbox.  Operating systems devote lots of calories to this problem.  I recall a setting in an old Univac operating system that set a limit on how many pages a user could print on the line printer.  Caja tries to wrap a box around arbitrary JavaScript so it can’t snoop on the rest of the web page.  My favorite framework for thinking about this kind of thing is capabilities.  Probably because I was exposed to them back at CMU in the 1970s.

Docker is yet another scheme for running stuff in a sandbox.  They call these containers, like a standardized shipping container.  I wonder if they actually took the time to read “The Box: …“, since it’s an amazing book.

Docker is also the usual hybrid open-source/commercial/vc-funded kind of thing.  Of course it has an online hub/repository.  Sort of like the package managers do; but in this case run by the firm.  Sort of like github.  The business model is interesting, but that’s – maybe – for another post.

Docker stands on a huge amount of work done over the last decades by operating-system folks on the sandboxing problem.  It’s really, really hard to retrofit sandboxing into an existing operating system.  The retrofitted thing is likely to have a lot of rough edges.  So – on the one hand – Docker is a system to reduce those rough edges, letting mere mortals play with sandboxes, finally.  But it is also trying to build a single unified API across the diversity of operating systems.  In theory that would let me make a container which I can then run “everywhere.”  “Run everywhere” is perennial, eh?

Most people describe docker as an alternative to virtual hosting (ec2, vmware, etc. etc.).  And that’s true.  But it’s also an alternative to package managers (yum, apt, homebrew, etc. etc.).   For example say I want to try out “Tiny Tiny RSS,” which is a web app for reading RSS feeds.  I “just” do this:

docker run --name=my_db -d nornagon/postgres
docker run -d --link my_db:db -p 80:80 clue/ttrss
open http://localhost/

Those three lines create two containers, one for the database and one for the RSS reader.  The 2nd line links the database into the RSS container, and exposes the RSS reader’s http service on the localhost.  The database and RSS reader containers are filled in with images that are downloaded from the central repository and cached.  Disposing of these applications is simple.

That all works on a sufficiently modern Linux, since that’s where the sandboxing support Docker depends on is found.  If you are on Windows or the Mac then you can install a virtual machine and run inside of that.  The installers will set everything up for you.

emacs, node, javascript, oh-my

Each time I turn my attention to using JavaScript I’m a bit taken aback by how tangled the Emacs tooling is. So here are some random points I discovered along the way.

Of course there is a very nice debugger built into Chrome, and that does a lot to undermine the incentives to build something else in Emacs. I only recently discovered there is a more powerful version of that debugger.

Safari and Chrome, because they have WebKit in common, can be asked on start-up to provide “Remote WebKit Debug” connections. You invoke ’em with a switch, and they then listen (i.e. open -a 'Google Chrome' --args --remote-debugging-port=9222). Bear in mind that this debug protocol is quite invasive, i.e. it’s a security risk. Having done that it’s fun, educational, and trivial to look at the inside of your browser session, just visit . Tools that use this protocol use the json variant rooted at .

One thing that makes the Emacs tools for working on JavaScript such a mess is that there are far too many ways to talk to the JavaScript instances, and then there are multiple attempts to use each of those. So there are two schemes that try to use the remote WebKit debug pathway: Kite, and jss (also known as jsSlime).  I’ve played with both, and have them installed, but I don’t use them much. Both of these are useful in their own ways; I developed a slight preference for jss, which has a pretty nice way to inspect objects.  Though I’m on the lookout for a good Emacs-based JavaScript object inspector.

There is a delightful video from Emacs Rocks explaining yet another scheme for interacting with JavaScript from Emacs using swank-js.  What’s shown in that video is wondrous and a bit mysterious.  The mysterious bit is that it doesn’t actually make it clear what the plumbing looks like.  I’ll explain.

Slime is a venerable Emacs extension originally developed to interact with Common Lisp processes.  It does that via a protocol called swank.  Which means that unlike, for example, Emacs shell mode, there is a real protocol.  The sweet thing about slime/swank is that it provides a wide window into the process enabling all kinds of desirable things, at least for Common Lisp, like: inspecting objects, redefining single functions, debugging, thread management, etc.  In the video you can see he’s managed to get a swank-like connection into a browser tab, and this lets him define functions and dynamically tinker with the tab.

The plumbing is wonderfully messy.  A node.js process acts as an intermediary, bridging between swank (for the benefit of Emacs) and a web-socket-based debugging protocol that hooks into the browser.  I assume that web socket protocol is similar, if not identical, to the remote WebKit debug protocol.  In the video the Emacs command M-x slime-jack-into-browser establishes the pipeline, and reading that code is enlightening.

A consequence of that design is that the resulting Emacs slime buffer is actually interacting with two processes: a node.js process and the JavaScript in the browser tab.  You can switch between these.  I find that goes wrong sometimes, and it took me a while to discover the slime command (slime commands start with a comma) “,sticky-select-remote”.  If you hit tab it will list the things you might talk to.

The swank-js github instructions are pretty good.  And they explain how to use swank-js with node.js – though that assumes you’re reasonably comfortable with node.js already.  I don’t actually follow those instructions.  Instead, after including swank-js in my projects’ dependencies, as the instructions suggest, I require('swank-js') in my main module (only when in a development mode of course).  It’s worth noting that when you then slime-connect to your node.js program you’ll be in the global object.  Your actual program (usually found in the file server.js) will have been wrapped up in a function and hence its local variables are invisible to you.  I work around that by putting interesting state into an object and then do something like global.interesting = interesting.

Recall the remote WebKit debug protocol?  There is a clone of that for node.js known as node-inspector.  Use it!

I have yet to try two other Emacs/JavaScript interaction packages: slime-proxy and skewer.

If you don’t use slime already you might be able to install these using the usual Emacs package repositories.  I ran into problems with that because I use a very fresh slime/swank and the Emacs package system wanted to bless me with older variants.

Hope this helps.

 

tidy up the output of lisp macros

For some reason it makes my teeth hurt to have my macros generate code that I wouldn’t have written by hand.  For example it’s not hard to get code like this out of a macro expansion.

(let ()
  (progn
    (if (fp x)
      (progn 
         (f1 x)
         (f2 x)))))

v.s. what I might like:

(when (fp x)
   (f1 x)
   (f2 x))

I probably ought to just relax and ignore it, but instead I often revise macros so the code they generate is nicer to look at.   So that:

`(let ,vars ,@body)

becomes

(if vars
    `(let ,vars ,@body)
    `(progn ,@body))

This is silly!   Now I have ugly macros instead of ugly output.   I’m just moving the ugly bits around.

So I’ve started doing this:

(tidy-expression `(let ,vars ,@body))

where tidy-expression is something like this:

(defun tidy-expression (x)
  (match x
    (`(or ,a (or ,@b)) `(or ,a ,@b))
    (`(progn ,a (progn ,@b)) `(progn ,a ,@b))
    (`(progn ,x) x)
    (`(and ,a (and ,@b)) `(and ,a ,@b))
    (`(if ,a (progn ,@b)) `(when ,a ,@b))
    (`(if ,a (progn ,@b) (progn ,@c)) `(cond (,a ,@b) (t ,@c)))
    (`(if ,a (progn ,@b) ,c) `(cond (,a ,@b) (t ,c)))
    (`(if ,a ,b (progn ,@c)) `(cond (,a ,b) (t ,@c)))
    (`(let ,vs (progn ,@body)) (tidy-expression `(let ,vs ,@body)))
    (`(let nil ,@body) (tidy-expression `(progn ,@body)))
    (_ x)))

It’s another chapter in my crush on optima.

I write these tidy up functions as necessary.

That example only chews on the top of the form.   If you wanted something to clean up the first example you’d need to write tidy-expression-all.

(tidy-expression-all
 '(progn
   (if (fp x)
       (progn 
         (f1 x)
         (f2 x)))))
-->
(when (fp x) (f1 x) (f2 x))

This all reminds me of Warren Teitelman’s programmer’s assistant in Interlisp.  It reminds me of some of the things that flycheck in Emacs does for other programming languages.  It reminds me that I’ve been wondering what a lint for Common Lisp would look like.

I bet somebody already wrote a generalized tidy-expression and I just don’t know where to look.

Lint for your shell scripts!

ShellCheck will critique your shell scripts, i.e. it’s a static checker for Bash etc.  You can try it online at shellcheck.net.  You can also install it on your own machines.

If you combine it with Emacs’ flycheck it is particularly delightful!

I had trouble installing it locally.  I believe this recipe works.  It’s written in Haskell, so first you install Haskell’s package manager cabal; I used brew install cabal-install.  Then do cabal update.  At that point you clone shellcheck from github (git clone https://github.com/koalaman/shellcheck.git), cd into the resulting directory (cd shellcheck), and run cabal install.  Sometime later ~/.cabal/bin/shellcheck appears, and you can set flycheck-sh-shellcheck-executable so flycheck can find it.
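For a taste of what ShellCheck flags, the classic is SC2086: an unquoted variable expansion that word-splits.  A tiny sketch (the file name is made up):

```shell
# SC2086: an unquoted variable word-splits.  A file name with a space:
f="my file.txt"
touch "$f"
if ls $f >/dev/null 2>&1; then unquoted=ok; else unquoted=fail; fi  # splits into "my" and "file.txt"
if ls "$f" >/dev/null 2>&1; then quoted=ok; else quoted=fail; fi    # quoted form works
echo "unquoted: $unquoted, quoted: $quoted"
rm -f "$f"
```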

Wolfram Alpha

Watching these intro to Wolfram Alpha videos (for example) gives me a certain arousal that I have experienced only a handful of times.  The first time was when I discovered the APL prompt hidden inside the Dartmouth Basic CLI on the Teletype in high school.  Then there was Focal on the PDP-12 which let you draw charts on the machine’s CRT.  Later there was discovering that I could run Mathematica at MIT over the Arpanet from CMU.  And, of course, the Lisp Machine.  I don’t know what it is about these systems that gets me excited.  R, Prolog, Erlang, Emacs all got close but never managed to trigger this curious arousal for me.

Wolfram Alpha is a lovely example of what is now possible.  The Lisp Machine was so very similar, particularly in its UI, to what that demo is showing.  But what’s new is the amount of data and algorithms we can now bring to bear.  Where are the engineering stations that treat the entire web as a reasonably well-organized dataset?  That is/was the RDF fantasy.  It’s nice to see it starting to pop up.

Of course it’s unlikely that I’d invest a lot in climbing the learning curve of a system like this, given that it’s proprietary.

Update: Article in Slate, and a classic on Wolfram’s ego and many other flaws.

,@ v.s. ,.

I’m surprised that I didn’t know about the ,. construct in Common Lisp’s backquote syntax.  It is equivalent to ,@ except that it licenses the implementation to destructively modify the tail of the list being inlined.

cl-user> (defparameter *x* '(1 2 3))
*x*
cl-user> *x*
(1 2 3)
cl-user> `(a b ,@*x* c d)
(a b 1 2 3 c d)
cl-user> *x*
(1 2 3)
cl-user> `(a b ,.*x* c d)
(a b 1 2 3 c d)
cl-user> *x*
(1 2 3 c d)
cl-user>