Into the Woods

A few people recommended this long talk by Van Jacobson, one of the many fathers of the Internet, wherein he argues for the need for a break with the past, something new in network architecture. What he is saying here has some overlap with stuff I’ve been interested in, i.e. push.

He argues that we have settled into usage patterns that are at odds with what TCP/IP was designed for. This is obvious, of course. TCP/IP was designed for long-lived connections between peers; but what we use it for today is very short connections where one side says “yeah?” and the other side replies with, say, the front page of the New York Times. I.e. we use it to distribute content.

And so he argues for a new architecture.  Something like a glorious p2p proxy server system.  You might say “yeah?” unto your local area and then one or more agents in your local area would reply with, say, the front page of the New York Times.
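To make that pattern concrete, here is a minimal sketch of the “ask your local area” exchange: a requester multicasts an interest for a named piece of content, and any nearby peer holding a copy replies directly. The multicast group, port, and naming scheme here are illustrative assumptions of mine, not part of any real protocol.

# Sketch: multicast an interest to the local area; whoever holds the
# named content answers. Group/port/names are made up for illustration.
import socket
import struct
import sys

GROUP = "224.0.0.251"   # hypothetical local multicast group
PORT = 8765             # hypothetical port

def ask(name: str, timeout: float = 2.0) -> bytes | None:
    """Multicast an interest for `name` and wait for the first reply."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(name.encode(), (GROUP, PORT))
    try:
        data, _peer = sock.recvfrom(65535)
        return data
    except socket.timeout:
        return None

def serve(store: dict[str, bytes]) -> None:
    """Answer interests for any content we happen to hold."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        name, peer = sock.recvfrom(65535)
        content = store.get(name.decode())
        if content is not None:
            sock.sendto(content, peer)

if __name__ == "__main__":
    if sys.argv[1:] == ["serve"]:
        serve({"/nytimes/frontpage": b"All the news that fits"})
    else:
        print(ask("/nytimes/frontpage"))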

The talk is a bit over an hour and fun to listen to. There is much to chew on, and as he says, it’s a hard talk to give. In a sense he’s trying to tempt his listeners into heading out into a wilderness. On the one hand, I’m not sure he appreciates how much activity is already out there in that wilderness. On the other hand, switching to a system like this requires getting servers to sign a significant portion of their content to guard against untrusted intermediaries, and there are reasons why that hasn’t happened. That he never mentions push bothers me. He points to a few systems that he finds interesting in this space, but I don’t think the ones he mentions are particularly interesting.
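On the signing point, the mechanics are simple even if the deployment isn’t. A minimal sketch, assuming the pyca/cryptography package and a publisher key that readers trust out of band; the names are illustrative:

# Sketch: a publisher signs each piece of content once; any cache or
# peer can hand it onward, and the reader verifies the signature.
# Requires the pyca/cryptography package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the content.
publisher_key = Ed25519PrivateKey.generate()
content = b"<html>front page of the New York Times</html>"
signature = publisher_key.sign(content)

# Reader side: the public key is trusted out of band; the content and
# signature arrived via an untrusted intermediary.
public_key = publisher_key.public_key()

def verify(blob: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

assert verify(content, signature)                 # untampered copy
assert not verify(content + b"spam", signature)   # tampered in transit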

These are provocative ideas, very analogous to the ideas found in the ping hub discussions and the peer-to-peer discussions. It would be fun to try to build a heuristic prefetching/pushing, privacy-respecting HTTP proxy server swarm along these lines. No doubt somebody already has.
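For flavor, here’s a toy sketch of the heuristic-prefetching half of that idea: after fetching a page, a proxy speculatively pulls a few of the pages it links to into its cache before anyone asks. The first-few-same-site-links heuristic is a stand-in assumption of mine; a real swarm member would presumably learn from observed request patterns.

# Toy sketch of heuristic prefetching: after fetching a page, warm the
# cache with a few of the pages it links to. Standard library only.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

cache: dict[str, bytes] = {}

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links: list[str] = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch(url: str) -> bytes:
    if url not in cache:
        with urllib.request.urlopen(url) as resp:
            cache[url] = resp.read()
    return cache[url]

def fetch_and_prefetch(url: str, limit: int = 3) -> bytes:
    body = fetch(url)
    parser = LinkCollector()
    parser.feed(body.decode("utf-8", errors="replace"))
    site = urlparse(url).netloc
    prefetched = 0
    for href in parser.links:
        target = urljoin(url, href)
        if urlparse(target).netloc == site and target not in cache:
            try:
                fetch(target)           # warm the cache speculatively
                prefetched += 1
            except OSError:
                pass                    # prefetch failures are harmless
        if prefetched >= limit:
            break
    return body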

6 thoughts on “Into the Woods”

  1. Edward Vielmetti

    The biggest challenge with systems like this is cache consistency; you are trying to make the argument that (for the New York Times) it is better to publish the paper through the equivalent of a roadside news stand server that is periodically refreshed with current content.

    That’s plausible for relatively stable content, but implausible for content whose time to live is measured in minutes or seconds; you’d have to update the cached copy every time someone writes a comment on a newspaper web site or a correction is made to a story. And there’s nothing, ever, worse than publishing a correction and then wondering whether all of your downstream replica sites have actually obtained the updated copy.

  2. Ben Hyde

    Ed – Certainly that makes the design more interesting. But even if everything had short times to live, value would emerge from shifting the content distribution deeper into the network architecture.

    In a sense the general question is: given what we now know about how the net is used, what should we move deeper into the layering?

  3. Ben Hyde

    Or, to put it another way: failing to respect short times to live is just another failing in the middleman. Any system like this has to architect in protections against the entire range of agency problems.

  4. Edward Vielmetti

    I can think of a few systems that do caching, widespread on the Internet, to give some framing for this.

    1. DNS caches aggressively; caching is built into the protocol.

    2. Content distribution networks, the Akamais of this world, cache. Usually there are contracts involved that ensure performance at some level, though not always.

    3. User-side proxy caching servers like Squid cache at the client side.

    There’s some multidimensional problem space that each of these solutions works pretty well in, depending on your relative preferences for latency, bandwidth, and cost. Better, faster, cheaper: pick any two.
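Ed’s correction worry above has a standard answer in HTTP-land: short TTLs plus cheap revalidation. A minimal sketch of that discipline, with an in-memory origin and replica standing in for real servers; the version numbers play the role of ETags, and all the names are illustrative:

# Sketch: a replica serves its cached copy only within a short TTL,
# then asks the origin "has version X changed?" before serving again,
# so corrections propagate at the next revalidation.
import time

class Origin:
    def __init__(self, body: bytes):
        self.body = body
        self.version = 1
    def publish_correction(self, body: bytes) -> None:
        self.body = body
        self.version += 1
    def get(self, have_version: int | None = None):
        """Return (version, body); body is None if the copy is current."""
        if have_version == self.version:
            return self.version, None       # the "304 Not Modified" case
        return self.version, self.body

class Replica:
    def __init__(self, origin: Origin, ttl: float):
        self.origin = origin
        self.ttl = ttl
        self.body = None
        self.version = None
        self.checked_at = 0.0
    def get(self) -> bytes:
        if self.body is None or time.monotonic() - self.checked_at > self.ttl:
            version, body = self.origin.get(self.version)
            if body is not None:            # changed upstream
                self.body, self.version = body, version
            self.checked_at = time.monotonic()
        return self.body

origin = Origin(b"story v1")
replica = Replica(origin, ttl=60.0)
assert replica.get() == b"story v1"
origin.publish_correction(b"story v2")
# Within the TTL the replica may still serve the stale copy; once the
# TTL expires it revalidates and picks up the correction.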
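And for the client-side caching in Ed’s list, a sketch of the basic trade: honor the origin’s Cache-Control max-age, paying origin latency only when the cached copy has expired. Standard library only; error handling and the rest of the Cache-Control grammar are omitted.

# Sketch: a tiny client-side cache that honors Cache-Control max-age.
import re
import time
import urllib.request

_cache: dict[str, tuple[float, bytes]] = {}   # url -> (expires_at, body)

def _max_age(cache_control: str | None) -> float:
    match = re.search(r"max-age=(\d+)", cache_control or "")
    return float(match.group(1)) if match else 0.0

def get(url: str) -> bytes:
    now = time.monotonic()
    hit = _cache.get(url)
    if hit and hit[0] > now:
        return hit[1]                         # fresh: no origin traffic
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
        ttl = _max_age(resp.headers.get("Cache-Control"))
    if ttl > 0:
        _cache[url] = (now + ttl, body)
    return body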
