I’ve been toying with ideas around an update notification network that would:
- take notifications of updates from readers and writers and deliver them to readers,
- reduce the cost of polling for readers and writers,
- shorten the time it takes for updates to get to the readers, and
- avoid concentrating power in hubs while leveraging and giving value to the masses.
Two corners of this design space I’ve been kicking around in recently:
- How to maintain privacy for readers.
- How to deal with bad actors.
Bloom filters might provide a scheme to help with the privacy problem, though how they interact with the class issues is hard to think about. Each member of the network would publish the Bloom filter bitmap of his interests, and his peers would notify him when they saw updates that matched that filter. Notifications rattle around in the network only among those peers whose filters intersect on that notification’s bit settings. A peer in the network can set additional bits, either to protect his privacy or because those bits reflect the interests of his peers. A voracious reader, like the elite readers, can fill in all the bits. Notifications might not even mention the resource that was polled; instead they just enumerate the bits affected.
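A minimal sketch of what that could look like, assuming SHA-256-derived bit positions and made-up filter parameters (`M`, `K`) and topic names that a real network would have to tune and standardize:

```python
import hashlib

M = 256  # bits in the filter (assumed size; a real deployment would tune this)
K = 3    # hash functions per topic (also assumed)

def bits_for(topic: str) -> set[int]:
    """Map a topic to K bit positions via salted hashes."""
    return {
        int.from_bytes(hashlib.sha256(f"{i}:{topic}".encode()).digest()[:4], "big") % M
        for i in range(K)
    }

def make_filter(interests, noise_bits=()):
    """Publish a bitmap of your interests, plus extra bits set for privacy."""
    bits = set(noise_bits)
    for topic in interests:
        bits |= bits_for(topic)
    return bits

def matches(filter_bits: set[int], topic: str) -> bool:
    """A peer forwards a notification only if all its bits are set in your filter."""
    return bits_for(topic) <= filter_bits

# Alice's published filter: two genuine interests plus a few noise bits.
alice = make_filter(["weblog/alpha", "rfc-updates"], noise_bits={7, 42, 99})
print(matches(alice, "weblog/alpha"))  # True: a genuine interest always matches
print(matches(alice, "stock-ticker"))  # almost certainly False, but false positives are possible
```

The noise bits are the privacy knob: an observer can’t tell which set bits are real interests, and a peer who sets all the bits sees everything.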
A cool insight about the bad actors came out of a chat with Ben Laurie. It’s not really the quality of the participants you care about; you care about the quality of the notifications. That’s the same problem we find in open source projects – where we take all input and then strive to let thru the good stuff. In this case you could treat notifications injected into the network as rumors that something might be new, and then let some process in the network promote a rumor into an assertion that something’s changed. For example you might put a quality value into the notification, decaying that quality as it cascades thru the net and amplifying it whenever a peer acts upon the information and asserts that it’s correct. That could have nice synergies with the need for a mechanism that assures participants in the network that the resources they care about are being regularly monitored; i.e. the watchdog problem.
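The decay-and-amplify idea can be sketched in a few lines. The constants here (`DECAY`, `BOOST`, `THRESHOLD`) are invented for illustration; picking robust values is exactly the hard, unsolved part:

```python
DECAY = 0.8        # per-hop decay factor (assumed)
BOOST = 2.0        # multiplier when a peer verifies the update firsthand (assumed)
THRESHOLD = 0.5    # quality needed to treat a rumor as an assertion (assumed)

def relay(quality: float) -> float:
    """Each hop thru the network erodes a rumor's quality."""
    return quality * DECAY

def confirm(quality: float) -> float:
    """A peer that polls the resource and sees the change amplifies the rumor."""
    return min(1.0, quality * BOOST)

q = 0.6          # a fresh rumor injected by an unknown peer
q = relay(q)     # 0.48, below threshold: still just a rumor
q = confirm(q)   # 0.96: a peer checked and vouched, now an assertion
print(q >= THRESHOLD)  # True
```

Unconfirmed rumors fade out on their own, which doubles as a watchdog signal: if nothing you care about ever gets confirmed, nobody is monitoring it.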
I’m not really satisfied with the Bloom filter approach to anonymity; it looks like it has many kinds of problems – but it’s a fun idea anyway. Similarly, I doubt that the notification QA approach is robust against really committed bad actors.
All this is an interesting window into the class problems. The elite often solve the anonymity and bad-actor problems using business agreements, lawyers, and the courts. That allows them to punt on designing protocols to solve them. That tends to force the masses to go thru bottlenecks created by the elite. The function of that bottleneck is to have the elite dude vouch that they aren’t bad actors while laundering their traffic to (hopefully) give them some anonymity.
NTP has a good model of reliability, with ‘strata’ for stating your distance from the top, and the ability to ping a node to see its state. To deal with trust you get data from multiple sources, i.e. confirmation. Nelson Minar did a cute little paper on the resulting system’s accuracy problem a while back…
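The confirmation idea translates directly to update notifications: don’t act on a claimed new state until enough independent sources report the same thing. A toy sketch, with the quorum size as an assumed parameter:

```python
from collections import Counter
from typing import Optional

def confirmed(reports: list[str], quorum: int = 2) -> Optional[str]:
    """Trust a claimed state only when at least `quorum` independent sources agree."""
    if not reports:
        return None
    state, count = Counter(reports).most_common(1)[0]
    return state if count >= quorum else None

print(confirmed(["v2", "v2", "v1"]))  # v2: two sources agree
print(confirmed(["v2", "v1", "v3"]))  # None: no quorum yet
```

Like NTP’s strata, you could also weight reports by a source’s distance from the original resource, though that’s left out of this sketch.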
That paper’s here: “Synchronizing clocks is an important and difficult problem in distributed systems.” :-).