Category Archives: power-laws and networks

Graphical Programming and yEd

Graphical programming languages are like red sports cars.  They have lots of curb appeal, but they are rarely safe and reliable.

I long worked for a company whose product featured a very rich graphical programming language.  It enabled an extremely effective sales process.  The salesman would visit the customer, who would sketch a picture of his problem on the whiteboard, and the salesman would inquire about how bad things would get if the problem didn’t get solved.

Meanwhile in the corner the sales engineer would copy the drawing into his notebook.  That night he would create an app in our product whose front page looked as much like that drawing as possible.  It didn’t really matter if it did anything, but it usually did a little simulation: some icons would animate and some charts would scroll.  The customers would be very excited by these little demos.

I consider those last two paragraphs a delightful bit of sardonic humor.  But such products do sell well.  Customers like how pretty they look.  Sales likes them.  Engineering gets to have mixed feelings.  The maintenance contracts can be lucrative.  That helps with business model volatility.  So yeah, there is plenty of value in graphical programming.

So one of the lightning talks at ILC 2014 caught my attention.  The speaker, Paul Tarvydas, mentioned in passing that he had a little hack based on a free drawing application called yEd.  That evening I wrote a similar little hack.

Using yEd you can make illustrations, like this one showing the software release process for most startups.

My few lines of code will extract the topology from the drawing, at which point you can build whatever strikes your fancy: code, ontologies, data structures.  (Have I mentioned how much fun it is to use Optima to digest a glob of XML?  Why yes I have.)
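
yEd saves drawings as GraphML, which is just XML.  My hack was a few lines of Lisp, but the trick fits in a few lines of most anything.  Here is a rough Python sketch of the same idea; the namespaces and the label location are from memory of yEd’s output, so treat it as an approximation rather than gospel:

import sys
import xml.etree.ElementTree as ET

G = "{http://graphml.graphdrawing.org/xmlns}"   # GraphML namespace
Y = "{http://www.yworks.com/xml/graphml}"       # yEd's yWorks extensions

def edges(path):
    root = ET.parse(path).getroot()
    labels = {}
    for node in root.iter(G + "node"):
        # yEd tucks the visible text of a node into a y:NodeLabel element.
        label = node.find(".//" + Y + "NodeLabel")
        text = label.text.strip() if label is not None and label.text else node.get("id")
        labels[node.get("id")] = text
    for edge in root.iter(G + "edge"):
        yield labels[edge.get("source")], labels[edge.get("target")]

for source, target in edges(sys.argv[1]):
    print(source, "->", target)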

I was also provoked by Faré Rideau’s talk.  Faré is evangelizing the idea that we ought to start using Lisp for scripting.  He has a package, cl-launch, intended to support this.  Here’s an example script.  Let’s dump the edges in that drawing:

bash-3.2$ ./topology.sh abc.graphml
Alpha -> Beta
Beta -> Cancel
Beta -> Beta
Beta -> Beta
bash-3.2$

I’ve noticed, dear Reader, that you are very observant.  It’s one of the things I admire about you.  So you’re wondering: “Yeah Ben, you found too many edges!”  Well, I warned you that these sports cars are rarely safe, didn’t I?

Welfare Economics

MBA types like to talk about “your business model,” and rather less about “their business model.”  I like to ask about the model’s effect on the wealth distribution.  It’s a hard question, but generally few businesses actually shift wealth and income in what I’d see as the desirable directions.

With that said, here’s a cute B-School chart:

For my purposes think of these two technologies as two business models, i.e. ways of organizing the world to create goods for sale to the public.  And we can think of the two axes as rich and poor.  It helps illustrate how technology has consequences.

That drawing is taken from an interesting post by Steve Randy Waldman, who’s coming at the question I’m interested in from what might be a quite productive angle.  But one way or another this kind of modeling helps to illuminate what I mean when I try to highlight how your business model, standard, technology, ontology, etc. shape the resulting distribution in interesting and often powerful ways.

Continuous vs. Batch: The Census

Log, from Blamo: Civil War Reenactor

I am enjoying this extremely long blog post by Jay Kreps from LinkedIn about how logs can form the hub for a distributed system.  It’s TLMR: too long, must read.  It reminds me of my post about listening to the system, but more so.

He has a wonderful example of batch vs. continuous processing.  A dialectic worthy of its own post at some point.

The US census provides a good example of batch data collection. The census periodically kicks off and does a brute force discovery and enumeration of US citizens by having people walking around door-to-door. This made a lot of sense in 1790 when the census was first begun. Data collection at the time was inherently batch oriented, it involved riding around on horseback and writing down records on paper, then transporting this batch of records to a central location where humans added up all the counts. These days, when you describe the census process one immediately wonders why we don’t keep a journal of births and deaths and produce population counts either continuously or with whatever granularity is needed.
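
To make the dialectic concrete, here’s a toy sketch in Python (the event format is invented for illustration).  The batch style re-enumerates everything from scratch; the continuous style folds the journal of births and deaths into a running count:

def batch_count(snapshot):
    # Batch: periodically enumerate the whole population, census-style.
    return len(snapshot)

class RunningCount:
    # Continuous: consume the journal of births and deaths as it grows.
    def __init__(self, initial=0):
        self.count = initial
    def apply(self, event):
        self.count += 1 if event == "birth" else -1
        return self.count

census = RunningCount(initial=100)
for event in ["birth", "birth", "death", "birth"]:
    census.apply(event)
print(census.count)  # 102, the same answer a full recount would give, at any moment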

Cute.  My go-to example has always been the difference between the annual cycle(s) that arise from agriculture and tax law revisions vs. the newspaper’s daily cycle in service of the demand for fish wrapping.

But of course that’s not really continuous, it’s just batch with different cycle times.  And yet I once encountered a continuous system that involved a pipeline across a desert.  Each time the sun would emerge from behind the clouds the pipe would warm up and a vast slug of material would be ejected out the far end into a hastily built holding pit at the refinery.  Maybe slug processing would be a good fallback term for the inevitable emergence of batches in continuous systems.  Blame the clouds.

Smeed’s Law

In the 1930s a traffic engineer in England noticed a curious pattern in the data about highway deaths.  Here is the chart from the article he published.

The vertical axis shows deaths/car and the horizontal shows cars/person, with one dot for each country.  That’s for 1938.  In 1938 few people in Spain (19) owned a car, but those that did were causing a lot of deaths.  Switzerland (2) wasn’t fitting the model very well.  You can make up your own insta-theory for why countries with few cars/person kill more people with each car.
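
The curve Smeed fitted is usually quoted as D = 0.0003(np^2)^(1/3), where D is annual road deaths, n is registered vehicles, and p is population.  Divide through by n and you get the falling deaths-per-vehicle curve in the chart.  A quick sketch (the sample values are arbitrary):

def smeed_deaths(vehicles, population):
    # D = 0.0003 * (n * p^2)^(1/3), the commonly quoted form of the fit.
    return 0.0003 * (vehicles * population ** 2) ** (1 / 3)

def deaths_per_vehicle(cars_per_person):
    # Dividing through by n gives D/n = 0.0003 * (p/n)^(2/3).
    return 0.0003 * (1.0 / cars_per_person) ** (2 / 3)

for cpp in (0.005, 0.05, 0.5):
    print(cpp, "cars/person ->", round(deaths_per_vehicle(cpp), 5), "deaths/vehicle")
    # fewer cars per person -> more deaths per car, as in the 1938 chart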

Here’s a chart from 1980.  More countries, more years, more confirmation of the model.  The data are shown twice, the second time as a log-log graph.

Note that there are lots of things you might think would affect the numbers for a given country.  For example: seat belts, population density, driver median age, safety regulations, insurance, policing, road quality, dashboard cams…  But those aren’t part of this simple curve and so can only affect the residuals.

I stole these charts from J.G.U. Adams’ short article “Smeed’s Law: some further thoughts” in Traffic Engineering and Control, 1987.

I find this all weird.  You would think the traffic engineers would have a polished consensus by now about what this is saying.  Adams’ article has some interesting things to say; for example, that societies learn to manage cars as their numbers increase.  But I don’t sense there is a consensus in the profession, even now, 80+ years after the pattern was first noticed.

Selling out your Friends

Robert Shiller: “It’s not the financial crisis per se, but the most important problem we are facing now, today, I think, is rising inequality in the United States and elsewhere in the world.”  And he won a Nobel Prize.

I have a theory about this problem.  Think of the set of all the world’s supply chains as a network.  I think we need to grow this graph so it’s a lot more bushy at the low end.  Shrubbery!  I guess this theory shares a lot with Bill McKibben’s ideas in Deep Economy, or C.K. Prahalad’s ideas in The Fortune at the Bottom of the Pyramid.

‘I don’t keer w’at you do wid me, Brer Fox,’ sezee, ‘so you don’t fling me in dat brier-patch. Roas’ me, Brer Fox,’ sezee, ‘but don’t fling me in dat brier-patch,’ …

I continue to harbor great optimism about the Internet.  It can help us with this.  The Internet has an amazing power to enable communities of common interest to form.  These communities are great shrubbery.  Precursors of commerce?  Maybe.

But it’s worth chewing on the ideas in “how to lose friends and family via multi-level marketing,” a posting that Andrew highlights.  Andrew introduces the idea that MLM schemes provide a way for people to liquidate (i.e. convert to cash) their social networks.  Liquidation is what you get when you’re done monetizing a social network.  Lots of people are into that.  Monetize – what a word!  What can’t we monetize?  My cat?

So while I love the Internet’s power as a host for community forming, I must say I’m taken aback by how rapidly capitalism has evolved business models that feed on these tender shrubs.

Ironically, my social network got infected by one of these parasites just today.  A friend signed up for Venmo, a p2p payment company, and it posted this exciting fact to Facebook on his behalf.  I admit to an unhealthy curiosity about these emerging currency systems; for example, I think Bluebird is very interesting.  So I went and signed up for Venmo and installed the app.  A few moments later I was distressed to discover it was scanning the entire address book on my phone, maybe a few thousand entries.  If you want to use their payment network you have to hand over your contacts.  No way to avoid it.  So I uninstalled, etc.  Who knows if that helped?

I totally get that building out “the network” is an existential issue for companies like Venmo.  Desperate need is an excuse in a starving man; is it an excuse for a startup?  Not that you need to worry about Venmo.  Venmo got bought, and the buyer then got bought by PayPal.  So they captured and sold a network.  That this is what most internet startups need to do worries me.

Returning to shrubbery as a tool to work the inequality problem: no doubt there are many much more ethical ways to convert small communities into engines of economic activity.  It would be great to have a list.  No doubt looking at MLM business models would inform that search.

Ray Dolby’s business model

I read Ray Dolby’s obituary in the New York Times because the Dolby noise reduction system is a textbook example of a two-sided network business model.  Invented in the early ’60s, the system enabled you to get better audio quality out of recorded sound.  It transformed the audio signal to route around flaws in the tape, tape heads, and transport mechanisms.  The problem it solved grew quite severe when cassette tapes became popular.  To get the benefit a lot of parties along the supply chain needed to play along.  Two in particular: the companies that manufactured cassette players and the companies that manufactured the cassettes containing the entertainment.

The obituary gets it wrong.  Dolby’s achievement wasn’t the signal processing algorithms; his achievement was getting all the players to sign onto his system.  Two-sided networks (standards) are all about the difficulty of attracting, coordinating, and locking in two diffuse groups.  Dolby managed to own a standard, and so he got to charge a toll for his small part in intermediating between sound producers and consumers.  He then managed to steward that role so that even today his company (DLB) stands at the center of the standardization of sound.  Next time you’re watching a DVD, notice how right there in the credits the Dolby name appears.  Think about how much time and space that credit takes vs. other contributors’.  And today, it’s all digital!

I wonder if any of the New York Times’ obits talk about the deceased’s business model.

Open Reader API

I use Vienna as my RSS feed reader.  The new beta release is interesting.  A while back Vienna added support to pull your feeds from Google Reader.  I never used that; I pull each feed directly from the source.  I didn’t use it for three reasons: 1) while I read blogs on multiple devices, I partitioned the blogs per device; 2) I never got around to it; and 3) I don’t really like the idea of an intermediary knowing all the blogs I read.

The new release has support for pulling feeds from other sources.  And I’m delighted to see that there is some hope an open standard will emerge for the aggregation service’s API, along with open implementations of it.

In turn this would help to allow more privacy around the aggregation service.  That’s a hard problem, but I have a sketch of a possible solution.

Northwestern United States – Earthquake

Once upon a time, or so I am told, geologists believed that, unlike earthquake-prone California, the Pacific Northwest was pretty stable.  Other than a few Native American folktales, it’s been quiet since settlers showed up.  I’m reading “Cascadia’s Fault: The Earthquake and Tsunami That Could Devastate North America” (library, amazon, blog), which explains how they came to change their minds about that.

Now they think that something pretty horrific is in the cards.  If you can sublimate what that means, it’s a very cool detective story.  I particularly like that they know exactly when the last monster quake occurred: 9pm on January 26th, 1700.  They know this because of extensive written records of the tsunami it caused hitting the coast of Japan.  They also know it because they found trees still standing in salt marshes, killed when the ground sank and saltwater flooded in.  They pulled the well-preserved roots from under the mud and counted the rings.

They have core samples of the offshore mudslides that the monster quakes have created.  Using techniques from the oil industry they can match up the wiggles in these core samples, and from them they can puzzle out a history of monster quakes that goes back a long way.  They can draw a sobering timeline.

They know the mountaintops are slowly squeezing together.  These days they can watch the mountains of the entire region move every day, and slip slightly back again.  They can sum up how much stress has accumulated: around 60 feet of slippage will need to be unwound by the next quake.  The big ones on that timeline are magnitude 9.  No city with skyscrapers has ever experienced that.  The 2011 Japanese tsunami was triggered by one.

So, the state of Washington has a brochure; it suggests that most every bridge in the state would collapse.

ps. Mimi and I will be in San Francisco the last weekend of July for the Renegade Craft Fair.

Metering, discriminatory pricing, subscriptions … Adobe.

Pricing is a mess.  On the one hand you can argue that things should cost exactly what they cost to produce (including, of course, a pleasant lifestyle for their producers).  On the other hand you can argue that they should cost exactly whatever value their users extract from the product.  Surplus is the term of art: if you charge less than the value extracted, the consumer is left to capture the surplus value.

More than a decade ago I had a bit of fun at the expense of my employer arguing that we should switch all our pricing to subscriptions, just as Adobe has recently decided to do.  My suggestion was greeted with an abundance of eye rolling and head shaking.

Leaving surplus value on the table can be very risky for the producer.  It’s not just about how pleasant a lifestyle he gets (aka greed).  Businesses are multi-round games; what you can invest in the next round of the game depends on how much of the surplus value you capture vs. your competitors.  But also, businesses with large market share and large volumes gain scale advantages that drive down costs, establish standards, and generally create positive feedback loops.  (That leads to the perverse tendency for the largest vendor to be the best and the cheapest.)  Which brings us to discriminatory pricing, aka value pricing.

The demand-side network effects depend on the scale of your installed base.  Discounting lets you reach users that you wouldn’t otherwise.  If you can segment your market then you can enlarge it.  There is a standard textbook illustration for this.

That chart shows the number of buyers your product will have if you charge various prices; or, looking at it another way, it shows how much value users think they will get from your product.  If you’d like a lot of users you should charge the green price.  Your total revenue is, of course, the area of the rectangle.  Why not both?  Why stop there?  As a vendor, what you’d love is to charge everybody exactly what they are willing to pay.  You could have both the maximum number of users and all the area (revenue) under that curve.
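
Here’s a toy version of that chart in code; the linear demand curve and the numbers are made up for illustration.  One price earns a rectangle; perfect discrimination earns the whole area under the curve:

Q0, PMAX = 1000, 50.0  # hypothetical market size and highest price anyone would pay

def buyers(price):
    # Linear demand: how many buyers value the product at `price` or more.
    return max(0.0, Q0 * (1 - price / PMAX))

def single_price_revenue(price):
    # One price for everyone: the rectangle under the demand curve.
    return price * buyers(price)

def perfect_discrimination_revenue():
    # Charge each buyer exactly their willingness to pay: the whole
    # triangle under the demand curve, 1/2 * Q0 * PMAX.
    return 0.5 * Q0 * PMAX

best = max(range(51), key=single_price_revenue)
print(best, single_price_revenue(best))    # 25 12500.0
print(perfect_discrimination_revenue())    # 25000.0, twice the best single price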

Subscription pricing gives you a tool, because it lets you meter usage, which can stand in as a proxy for the value users are getting from the product.

I was surprised by Adobe’s subscription pricing, not because it’s expensive and draconian.  No, I was surprised because it appears to have no metering.  My insta-theory for why?  Well, I think what we are seeing at this stage is the classic “list price,” and that they will start offering various discounted variations on the service.  It would be odd if they didn’t.  Because otherwise they are leaving two things on the table.  First, they are shunning a huge pool of users, missing out on all the demand-side network effects those users would create, and encouraging competitors to fill that abandoned market segment.  And second, they are leaving money on the table.

I’ve no idea what they will meter, but I’d be surprised if they don’t.