I’ve greatly enjoyed much of Richard Gabriel’s writing over the years. Though I’ll admit I haven’t read anything he’s done in the past few years. In any case I happened to listen to this interview he gave at OOPSLA to Software Engineering Radio. The interviewer wanted to learn about this thing, Lisp, and he asks a series of questions to dig into the matter. While for me this was pretty dull, Richard does retell a story I’d not heard in recent years. That got me to thinking about a model of how ideas used to flow from the academic research labs into the programming community at large; and in particular how the Lisp community didn’t use standards in quite the same way as other language communities.
Lisp is a great foundation for programming language research, and not just because it’s easy to create new programming frameworks in Lisp. The pie chart of where you spend time building systems has a slice for framework architecting and engineering. Lisp programmers spend a huge portion of their time in that slice compared to folks working in other languages. In Lisp this process is language design, whereas in other languages it’s forced into libraries. Libraries in other languages tend to be high cost, which makes them more naturally suited to a standardization gauntlet. In Lisp it’s trivial to create new frameworks, so they are less likely to suffer the costs and benefits of becoming standardized.
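To make that concrete, here’s a minimal sketch of my own (nothing from the interview; the names defrule, run-rules, and *rules* are all invented for illustration). A tiny rule-based “framework” in Common Lisp is really just a couple of definitions, after which using it feels like writing in a new little language:

    ;; A toy rule mini-language: the whole "framework" is one registry,
    ;; one macro, and one driver function.
    (defvar *rules* '()
      "Registry of (name test action) rules, most recent first.")

    (defmacro defrule (name (var) test &body action)
      "Define a named rule: when TEST holds for VAR, run ACTION."
      `(push (list ',name
                   (lambda (,var) ,test)
                   (lambda (,var) ,@action))
             *rules*))

    (defun run-rules (input)
      "Apply the first rule whose test accepts INPUT."
      (loop for (name test action) in *rules*
            when (funcall test input)
              return (values (funcall action input) name)))

    ;; Usage reads like a declaration in a new language, not a library call:
    (defrule even-doubler (n) (evenp n)
      (* 2 n))

    (run-rules 4)  ; => 8, EVEN-DOUBLER

That’s the whole slice of the pie chart: a few macros and you’re programming in the framework rather than against it.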
You get a lot more short-term benefit in Lisp, and you pay later as sweet frameworks fail to survive. They never achieve sustainability because they don’t garner a community of users to look after them.
Back in the day this was less of a problem. And thereby hangs the tale that Richard casually mentioned. He was sketching out a pattern that was common during AI’s early golden age. Graduate students would aim high, as is their job, and attempt to create a piece of software that would simulate some aspect of intelligence – vision, speech, learning, walking, etc. etc. – what aspect doesn’t really matter. In service of this they would create a fresh programming language that embodied their hypothesis about how the behavior in question could be produced. This was extremely risky work with a very low chance of success. It’s taken more than fifty years to begin to get traction on all those problems, and back in the day computers were – ah – smaller.
Enticing graduate students into taking huge risks is good, but if you punish them for failing then pretty soon they stop showing up at your door. So you want to find an escape route. In the story that Richard cites, and which I’d heard before, the solution was to give them a degree for the framework.
Which was great. At least for me. All through that era I used to entertain myself by reading these doctoral theses outlining one clever programming framework after another.
What’s fascinating is that each of those acted as a substitute for a more formal kind of library standardization. They filled the role in the Lisp community that standardized libraries play today in more mainstream programming communities. This worked in part because individual developers could implement these frameworks, in part or, if they were in the mood, in their entirety, surprisingly quickly. These AI languages provided a set of what we might call programming patterns today. Each doctoral thesis sketched out a huge amount of detail, but each instance of the ideas found there tended to diverge under the adaptive pressure of that developer’s unique problems.
So while a doctoral thesis isn’t a standards specification, it can act, like margarine for butter, as a substitute. Particularly if the consumers can stomach it. Lisp programmers like to eat whole frameworks.