Category Archives: programming

nss_mdns

To give a FreeBSD machine the ability to resolve domain names managed by Zeroconf you can install the nss_mdns port and then modify /etc/nsswitch.conf. The modification inserts the token mdns on the line in nsswitch.conf that configures the lookup order for hosts:. I inserted it just after files.
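For the record, the resulting hosts: line looks roughly like this (a sketch; your line may list other sources):

 hosts: files mdns dns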

I wanted mdns so my backup server could refer to the laptops by the names they inject into the mdns information cloud.

The port doesn’t provide the latest version (as of today, I think), which makes Avahi complain that it isn’t installed. Avahi is an mDNS server, and you only need Avahi if you want to publish additional information into the multicast DNS cloud, or, as I want to do, reflect mDNS across subnets; think VPN. The libraries are so.1, not the so.2 the Avahi code desires, leading it to complain: “WARNING: No NSS support for mDNS detected, consider installing nss-mdns!”

While I was a bit puzzled that the way to add mdns support to your environment was to inject something into the C library, which is what NSS does, I’m getting over it. At this point it almost seems like the right approach.

Update: Sigh, it stopped working … I have no idea.

A Doctoral Thesis is not a Standards Specification, but…

I’ve greatly enjoyed much of Richard Gabriel’s writing over the years.  Though I’ll admit I haven’t read anything he’s done in the past few years.  In any case I happened to listen to this interview he gave at OOPSLA to Software Engineering Radio.  The interviewer wanted to learn about this thing, Lisp, and he asks a series of questions to dig into the matter.  While for me this was pretty dull, Richard does retell a story I’d not heard in recent years.  That got me thinking about a model of how ideas used to flow from the academic research labs into the programming community at large; and in particular how the Lisp community didn’t use standards in quite the same way as other language communities.

Lisp is a great foundation for programming language research.  It’s not just that it’s easy to create new programming frameworks in Lisp.  The pie chart of where you spend time building systems has a slice for framework architecting and engineering.  Lisp programmers spend a huge portion of their time in that slice compared to folks working in other languages.  In Lisp this process is language design, whereas in other languages it’s forced into libraries.  There is a tendency in other languages for the libraries to be high cost, which makes them more naturally suited for a standardization gauntlet.  In Lisp it’s trivial to create new frameworks, and they are less likely to suffer the costs and benefits of becoming standardized.

You get a lot more short term benefit in Lisp, and you pay later as sweet frameworks fail to survive.  They don’t achieve some level of sustenance because they don’t garner a community of users to look after them.

Back in the day this was less of a problem.  And thereby hangs the tale that Richard casually mentioned.  He was sketching out a pattern that was common during AI’s early golden age.  Graduate students would aim high, as is their job, and attempt to create a piece of software that would simulate some aspect of intelligence – vision, speech, learning, walking, etc. etc. – what aspect doesn’t really matter.  In service of this they would create a fresh programming language that manifested their hypothesis about how the behavior in question could be achieved.  This was extremely risky work with a very low chance of success.  It’s taken more than fifty years to begin to get traction on all those problems, and back in the day computers were – ah – smaller.

Enticing graduate students into taking huge risks is good, but if you punish them for failing then pretty soon they stop showing up at your door.  So you want to find an escape route.  In the story that Richard cites, and which I’d heard before, the solution was to give them a degree for the framework.

Which was great.  At least for me.  All thru that era I used to entertain myself by reading these doctoral theses outlining one clever programming framework after another.

What’s fascinating is that each of those acted as a substitute for a more formal kind of library standardization.  They filled a role in the Lisp community that standardized libraries play today in more mainstream programming communities.  This worked in part because individual developers could implement these frameworks, in part or, if they were in the mood, in their entirety, surprisingly quickly.  These AI languages provided a set of what we might call programming patterns today.  Each doctoral thesis sketched out a huge amount of detail, but each instance of the ideas found there tended to diverge under the adaptive pressure of that developer’s unique problem.

So while a doctoral thesis isn’t a standards specification it can act, like margarine for butter, as a substitute.  Particularly if the consumers can stomach it.  Lisp programmers like to eat whole frameworks.

Fun and Lazy.

I love reading Lisp blogs; for example in this posting we have a sketch of how to define a function G that you can use when the debugger decides you neglected to write a function F. You then bind G to F’s function definition and tell the debugger to try again. I can’t begin to imagine how you’d do that in most languages. In Common Lisp it’s not even that unusual; I’ve certainly done things along these lines.

Going back into the late 1970s I’ve had a joke about how lazy evaluation can be taken to extreme lengths. You build your operating system and wait until one of your developers actually writes some code. That code invokes an operating system call. You discover you haven’t written that call yet (in fact you haven’t written anything yet). So you call the routine whose job it is to see that code gets written. Oh my, that’s not written yet either. So the error handler for that invokes the get-a-programmer-assigned handler … which invokes the project manager handler … which invokes the HR hiring manager handler … Later, when this all unwinds, the program just works. So damn it, just ship it!

Bookmarking Chatrooms in Adium

This took me forever to puzzle out, so here you go. This explains how to avoid having to fill out the stupid dialog every time you want to join a chat room in Adium. Adium is a very nice IM client on the Macintosh, much nicer than iChat.

Most (all?) of the instant messaging networks also support chat rooms (sometimes called conferences, and sometimes just rooms). Adium has a command for joining these chat rooms, “Join Group Chat,” in its File menu. That command pops up a complex dialog which you then fill out. All this is good up until you discover that filling out the complex dialog is tedious and you’re doing it over and over again just to stay in your chat rooms.

So here is the trick. There is a toolbar gadget to create new “buddies” for your chat rooms. It is not enabled by default. So, once you’re in a chat room you need to: show the toolbar if it’s hidden; customize the toolbar; add the “bookmark” tool to your toolbar; and then use that to create a pseudo buddy. From then on you can join a chat room by clicking on that room’s pseudo buddy.

Now, if I can only figure out how to tell these to automatically connect & reconnect.

For example there are chat rooms for each US weather district; if you enter one of them you’ll get all the weather alerts for that district automatically. These are found in the Jabber IM network at the JID muc.appriss.com; for example the Boston district is known as box, and its room is zzboxchat. As far as I know Jabber’s xmpp: URI scheme doesn’t include a syntax for denoting these chat rooms.

maintaining a freebsd install

These notes are almost certainly wrong, you’ve been warned.

First off you need to decide what version of FreeBSD you’re running. When I last did this I decided that 6.2 was the safe choice. 6.3 had a few release candidates, but wasn’t actually released.

6.2 is a pain to install because after you install it you need to upgrade the X11 support, and that involves hand work to get over a major transition in the X11 system. This is documented in the ports UPDATING file. You’ll want to do that before you start installing too many ports that depend upon X11.  (Oddly I also must set ForwardX11Trusted.)
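The ForwardX11Trusted bit is just a line in the ssh configuration on the machine I connect from; roughly this, where mybsdbox stands in for whatever the host is actually called:

 # ~/.ssh/config on the client
 Host mybsdbox
     ForwardX11Trusted yes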
Next up is the problem of getting the security patches installed, and that affects the OS, the core utilities, and the ports; making for another reason to temper your enthusiasm for installing ports early. This step is eternal, particularly if the machine will be exposed on the open internet; and really, one way and another, what isn’t?
There are tools to help, but they are in various states of currency. In fact there are too many tools and something of a lack of guidance about which ones to use in which situations.

Freebsd-update is a help for the kernel. It’s a port, so you’ll need to install it. Then you need to wire it up so that crontab polls regularly for security patches.
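My crontab wiring is roughly the following; a sketch, assuming the port installs itself under /usr/local, so adjust the path and the time to taste. Cron mails whatever it prints to root.

 # /etc/crontab – look for new security patches overnight
 0  3  *  *  *  root  /usr/local/sbin/freebsd-update fetch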

Portsnap, portupgrade and portinstall are good. You may wish to wire portsnap into crontab so you’re informed of the freshest ports. (There seems to be something odd with my portupgrade, I have two: /usr/ports/{ports-mgmt,sysutils}/portupgrade. I think that happened early on as part of running portsnap. I believe I had to slam the one from ports-mgmt in over the one I’d installed earlier. Note that this happens in the middle of getting X11 upgraded.)
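Again a sketch of the crontab wiring, not gospel; portsnap’s cron command waits a random interval before fetching so everybody doesn’t hammer the servers at once, and portversion (part of portupgrade) then lists anything that’s out of date:

 # /etc/crontab – refresh the ports tree overnight, then report stale ports
 0  4  *  *  *  root  /usr/sbin/portsnap cron update && /usr/local/sbin/portversion -vL=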
Portaudit, another port, needs to be installed so you’re informed when your ports have security issues.
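Once installed it hooks into the nightly periodic run, but you can also poke it by hand; something like this, if I remember the flags right (fetch a fresh vulnerability database, show its date, audit everything installed):

 % portaudit -Fda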

Then you have to apply the patches freebsd-update is telling you about by hand, and you have to make thoughtful choices about how closely you track the latest and greatest ports. Of course upgrading to fix the problems portaudit points out can cascade into other ports.
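The hand work amounts to something like this, sketched from memory, as root (firefox is just a stand-in for whatever port portaudit complained about):

 # freebsd-update install
 # portupgrade -R firefox

and, when feeling brave, portupgrade -a to chase everything that’s out of date at once.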

All of that is hand work, which you have to do regularly; and you have to read root’s email. It’s a particular pain that portupgrade will occasionally decide it wants to interact with you personally to configure a package; when that happens it will want you to be on a traditional terminal.

I suspect that the approach outlined here doesn’t upgrade the core utilities should they have security issues uncovered. Similarly freebsd-update is useless if you’ve compiled a custom kernel. I was pleased that I didn’t need to do that this time.

I suspect that I haven’t managed to fall into a “best practice” pattern with all this yet. It appears that there is a lot of variation throughout the FreeBSD community in how to do this. The various tutorials and handbooks are interesting. Some are out of date. Most are confusing because they are full of too many choices! Some are confusing because the new better way hasn’t quite settled down and attracted a wide following; sometimes there appears to be more than one generation of new better way in that state.

Advice is welcome :).

Good olde unix tool – at

I can’t believe I’ve never thought of this, Ask writes:

… used the at daemon to automatically recover ... as root enter:

at "now + 5 minutes"
service iptables stop

You can type a whole list of commands and when you're done, press ctrl-d to stop.

It will look something like:

# at "now + 5 minutes"
service iptables stop
job 6 at Tue Sep 18 17:53:03 2007
#

I'm particularly surprised I'd not thought of that since I often use at for watchdog timers and in Rube Goldberg devices that automate workflows, and before I got my Treo I'd do this all the time:

$ at 18:32
mail -s "put money in parking meter!" ask@example.com < /dev/null
^D
job 8 at Tue Sep 18 18:32:00 2007

I used to use it for lots of such pests: sending messages like "back out testing patch," "discard experimental foo install," "submit rebates," "check that the check cleared," but I do such things with my Treo now.

flashing buffalo router from mac os x

I bought the highly spoken of (scroll down here) Buffalo WHR-G125, which is cheap, and flashed its software so I can use dd-wrt. I did this from a Mac following the directions, but in the end I had a tough time getting the timing right. So this posting’s purpose is to explain how to get the timing right.

I used two tricks. The first is to use expect to script tftp. The second is to ignore the instructions (which say to launch the upload when the lights on the router indicate that the single LAN Ethernet port you’re using to connect to it is now active). Instead I wait until the Mac brings up its interface.

My expect script looked like this:

#!/usr/bin/expect -f
set timeout 3000
spawn tftp
expect "tftp> "; send "binary\n";
expect "tftp> "; send "rexmt 1\n";
expect "tftp> "; send "connect 192.168.11.1\n";
expect "tftp> "; send "put dd-wrt.v24_std_whr-g125.bin\n";
expect "tftp> "; send "quit\n";

You run that with this command:

 % expect flash_it

That’s pretty straightforward, and expect is installed on the Mac if you have the unix tools, which you’re likely to if you’re flashing routers in your spare time.

It’s easy to see when the Mac brings up the network connection to the router. When you set things up following the instructions you configured your network to talk to the router. Open that System Preferences pane up again and stare at the TCP/IP page. You can monitor that page to see the connection to the router come up; first the Ethernet is sensed and a moment later the IP address is configured. It’s at that point that you hit return on the command above.

HamachiX & balance

HamachiX is a Mac OS X application for casually creating “virtual” private networks connecting random computers. It’s implemented by wrapping some user interface around the no-charge variant of the proprietary Hamachi VPN product. The VPN(s) it creates are named. The machines that join a network provide an appropriate password and get IP addresses like 5.85.1.2. At which point they may exchange data packets with each other. Hamachi is clever in that it uses p2p tricks to bust thru firewalls.

A typical application is to create a community of users who share iTunes collections, or printers, or whatever.

I’m using this to create http listeners on machines which sit on the public network that then forward to listeners on my laptop. Whenever my laptop manages to get on the network Hamachi rejoins the appropriate network and the forwards start working again. This allows me to demo things running on my laptop to random folks on the network at large. I do the port forwarding with balance (sudo port install balance).
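The balance invocation on the public machine is a one-liner, roughly this, where 5.85.1.2 stands in for the laptop’s Hamachi address and the ports are whatever you’re demoing:

 % balance 8080 5.85.1.2:3000

After that, anything hitting port 8080 on the public box gets handed to port 3000 on the laptop, whenever the laptop is on the Hamachi network.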

It looked like I could do something similar with tinc and avoid the issues raised by using proprietary software. But MacPorts doesn’t include tinc, and this certainly was easy. There are lots of choices for how to forward the listener, as I am doing with balance. I’d be curious to hear how other people do this kind of thing.

Update: This widget is an alternative to HamachiX once you get started. You can do all this on the command line, say on 10.3.9.

Update 2: Some people find that HamachiX goes crazy and consumes vast amounts of memory.  A problem you can work around by using it as an easier way to install things and then using the widget or the command line tool hamachi from there on in.  The command line tool gives you a better model of what’s really going on.
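From memory the command line dance goes something like this; treat the exact subcommands as my recollection rather than documentation, and my-network and its password are made up:

 % hamachi start
 % hamachi login
 % hamachi join my-network a-password
 % hamachi go-online my-network
 % hamachi list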
I’m finding it dependable for simple tcp/ip connections; but mDNS is spotty.  The forums are full of complaints about various non-working scenarios, and some of those are real vs. user confusion.  One of the foundation pieces, tuntap, is also known to occasionally misbehave on Intel Macs.

Payments? Check

Amazon appears to be the firm doing the most sophisticated job of engineering an instance of the new species of operating system. They just released the API for the payments component, which weighs in at 260 pages.

Some of what they have put forward, e.g. the historical pricing or the access to Alexa data, are perfect examples of how these OSes will leverage getting close to unique resources that the hub vendor has aggregated – i.e. these are vertical in the sense that they leverage unique supply-side advantages.  Others, like the storage and compute offerings, are perfectly horizontal.  The payments offering, while principally horizontal, is a bit of both.

Clearly some of these are more strategic than others.  I’d love to see their road map and to understand better the cross-API synergy and lock-in.  I presume there are people at eBay/PayPal, Google, Microsoft, and Yahoo thinking a lot about those.  If I were them I’d hope Walmart buys Amazon.
It is looking risky to be a hub vendor who just sells bandwidth, hosting, payments, whatever.  It is interesting how quickly these hubs are threatening each other’s survival.

Expect

I recall reading a paper, probably a tech report, from the Rand Corporation in the mid 1970s about a little AI program they had written which would watch a user interact with a time sharing system and then attempt to extract a script to automate that interaction.  Later in that decade I used Interlisp, whose read-eval-print command loop, or REPL, included a feature known as DWIM, or Do What I Mean.  DWIM was yet another primitive AI; it would look over your shoulder as you worked and try to help.  It was amusing, though in the end there was a consensus reached that it was more party trick than useful.

A while later, on unix, a serious problem emerged.  A delightful game, Rogue, appeared which we all played far too much.  When Rogue fired up it would randomly set up a game for you to play, and some of these were better than others.  This gave rise to a serious need, i.e. automation to find good games.  So people wrote programs to do that.

When the Mac came out countless hours were wasted complaining about how it lacked a command line.  Interestingly, one of the things it included, right from the start, was a record/playback mechanism.  Developers used this to automate testing.  (I should note here that the Mac had 128K bytes of RAM.)

All these systems are a work-around for a general problem.  Given an interface designed to target human users, what can we do to bring computers into that interface?  We have this problem writ large these days; since most of the content on the web is targeted at humans, it is continually frustrating how hard it is to get the websites to interact with the computers.

It’s a “who do you love?” question.  Unsurprisingly most website designers love the human users.  They labor to serve them well.  That crowds out efforts to serve the computers well, which has the perverse side effect of making it hard for the computers to help the humans.  This in turn, of course, plays into issues like RSS, XML, RDF, REST, etc.

Interfaces designed for humans are, unsurprisingly, different from those designed for computers.  A good list of what separates the two would be extremely useful!  For example a human interface is likely to be more visual, more asynchronous, more multi-threaded, more decorative, more commercial.  That list would be useful because we build lots of tools to bridge the difference.  Web spiders are one example.  Screen scrapers are another.  Automated testing tools are a third.

All this was triggered by my delight at discovering that my Mac, which has the unix tool set installed, has bundled in a program called ‘expect’.  Expect is a tool for just this kind of bridging.  It is the direct descendant of the tool written to get you a good game of Rogue; in fact its examples include a script to do just that.  Expect is designed for writing scripts to manipulate command line interfaces which were designed for humans.  The examples include all kinds of slightly perverse things.  Editing config files on N machines simultaneously, for example.  It’s a hoot.

It seems to me that there are powerful reasons why the dynamic that leads to tools like these spans so many decades.  For lots of reasons implementers love humans more than computers.  In some cases implementers hate the computers, while wanting to reach the humans.  Because of this the human-facing APIs will always be richer than the computer-facing ones; and we will forever be writing tools to bridge the gap.