Revealing, Authorities, Identity

Open systems thrive when contributors are encouraged to toss a little value into the pot. Stone soup and all that. Of course we all know there are bad actors out there, so if you run an open server you’re going to get evil contributions. Open up your email address, your cell phone, your cooperative wiki, your blog comments, and some twit is going to show up sooner or later posting porn, selling unregulated herbal remedies, and spraying rude graffiti on the walls.

Three solutions:

  • Authorize only trusted contributors.
  • Moderate contributions via some trusted mechanism.
  • Maintain, i.e., let trusted folks fix the damage after the fact.
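
To make those three concrete, here’s a back-of-the-envelope sketch in Python. The function names and toy predicates are mine, invented for illustration, not any real platform’s API:

    # A minimal sketch of the three gating strategies.

    def authorize(contributor, trusted_contributors):
        """Strategy 1: only pre-trusted contributors may post at all."""
        return contributor in trusted_contributors

    def moderate(post, approve):
        """Strategy 2: every post waits on a trusted moderator's decision."""
        return post if approve(post) else None

    def maintain(posts, is_damage):
        """Strategy 3: accept everything; trusted folks clean up afterward."""
        return [p for p in posts if not is_damage(p)]

    # Tiny usage example with stand-in data.
    trusted = {"alice", "bob"}
    print(authorize("alice", trusted))                            # True
    print(moderate("hello", approve=lambda p: "porn" not in p))   # hello
    print(maintain(["hi", "buy herbal remedies"],
                   is_damage=lambda p: "herbal" in p))            # ['hi']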

So it’s all about trust, authentication, etc. etc.

The problem is that we only know how to solve it one way: the contributor must reveal something to us so we can authenticate him, and some central authority must vouch for the guy.
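
In code, that one known solution looks roughly like the sketch below. I’m using an HMAC as a stand-in for whatever real signature scheme the authority would use; in a real deployment the authority would sign with a private key and the server would verify with only the authority’s public key:

    # Sketch of reveal-plus-vouch: the contributor reveals an identity,
    # a central authority vouches for it, and the server checks the vouch.
    import hmac, hashlib

    AUTHORITY_KEY = b"central-authority-secret"  # stand-in for a real keypair

    def authority_vouch(identity: str) -> bytes:
        """The central authority vouches for the revealed identity."""
        return hmac.new(AUTHORITY_KEY, identity.encode(), hashlib.sha256).digest()

    def server_accepts(identity: str, vouch: bytes) -> bool:
        """The open server trusts the authority, not the contributor."""
        expected = hmac.new(AUTHORITY_KEY, identity.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(vouch, expected)

    voucher = authority_vouch("ben@example.com")            # the contributor reveals
    print(server_accepts("ben@example.com", voucher))       # True
    print(server_accepts("imposter@example.com", voucher))  # False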

Those two terms are trouble:

  • Reveal
  • Central Authority

Those two go a long way toward explaining why the Internet identity problem is so hard. On the one hand the solution needs to enable revealing (which sounds a lot like privacy intrusion, embarrassment, and identity theft), and on the other it needs to enable the emergence of central authorities (which sounds a lot like an authoritarian police state, abusive monopoly, and a single point of failure).

This stuff just doesn’t conform well to the end-to-end principle. Sure, sure, you can run your own “authority” out at the edge. Your blog, wiki, or email client can sit there and infer the trustworthiness of your contributors from various implicit and explicit signals. That’s all well and good, but it’s socially dysfunctional.
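
An edge “authority” of that sort often amounts to little more than a local scoring heuristic. A sketch, with the signal names and weights invented purely for illustration:

    # Sketch of an edge-resident "authority": a local heuristic that
    # infers trustworthiness from signals this one site happens to see.
    WEIGHTS = {
        "posts_approved": 2.0,    # explicit: moderator approved past posts
        "links_per_post": -1.5,   # implicit: link-heavy posts smell like spam
        "account_age_days": 0.1,  # implicit: older accounts are less risky
    }

    def local_trust_score(signals: dict) -> float:
        return sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in signals.items())

    # One contributor, as seen from this one blog's edge.
    print(local_trust_score(
        {"posts_approved": 3, "links_per_post": 4, "account_age_days": 30}))  # 3.0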

People are social creatures. They project a personality. They manifest assorted behaviors and attributes so that we can construct a model of them. A world where everyone is totally anonymous is just bizarre.

Worse, solving this problem at the edges means that trust and reputation aren’t fungible. Not fungible means lock-in, and lock-in means increased power-law reinforcement. Not good. Why spend time contributing to a dinky open source project when you could spend that time contributing to a famous project and gain a reputation that’s “worth something”? Let’s say it takes 10 good postings at Bob’s community before I’m allowed to post without moderation and join the discussion. Once I’ve climbed over that barrier, why would I bother to go join Sam’s community?
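
The lock-in is easy to see in code. A sketch in which every community keeps its own private ledger (the threshold of 10 comes from the example above; the class is my invention):

    # Sketch of non-fungible reputation: each community keeps its own
    # ledger, so standing earned at Bob's counts for nothing at Sam's.
    class Community:
        def __init__(self, name: str, unmoderated_threshold: int = 10):
            self.name = name
            self.threshold = unmoderated_threshold
            self.good_posts = {}   # per-user ledger, private to this site

        def record_good_post(self, user: str) -> None:
            self.good_posts[user] = self.good_posts.get(user, 0) + 1

        def can_post_unmoderated(self, user: str) -> bool:
            return self.good_posts.get(user, 0) >= self.threshold

    bobs, sams = Community("Bob's"), Community("Sam's")
    for _ in range(10):
        bobs.record_good_post("me")

    print(bobs.can_post_unmoderated("me"))  # True: I climbed Bob's barrier
    print(sams.can_post_unmoderated("me"))  # False: none of it transfers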

This doesn’t make for an open system, this makes for a compartmentalized system.

Now I’m not arguing that reputation can be fungible like ounces of gold; but I am arguing that a design that declares from the get-go that reputation shouldn’t be fungible is wrong.
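
A design that leaves the door open might, for instance, let one community import another’s attested standing at a discount rather than declaring it non-transferable. A sketch; the discount rate is invented purely for illustration:

    # Sketch of partially fungible reputation: Sam's community honors
    # an attestation from Bob's at a discount instead of ignoring it.
    def imported_credit(attested_good_posts: int, discount: float = 0.5) -> int:
        """Credit a newcomer with a discounted share of elsewhere-earned standing."""
        return int(attested_good_posts * discount)

    # I arrive at Sam's with an attestation of 10 good posts from Bob's.
    print(imported_credit(10))  # worth 5 good posts here -- not 10, and not 0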

So this is the rub. We have a problem that demands we figure out how to design the central authorities in a manner that keeps them from becoming too powerful – and we know a lot about that. We have a problem that demands we empower the users to reveal exactly what they wish to reveal, and nothing more.

It’s just blind foolishness to pretend we can design a system with no authorities and no revealing. Worse is to believe that systems of that kind make for a more open system. Open systems thrive on having complex porous membranes. No membrane is fatal.

One thought on “Revealing, Authorities, Identity”

  1. Lucas

    Your thoughts are the closest to mine I have found. Basically I think the data itself needs to be decentralized, and the meta-data centralized. So the XML file on our blog server contains opinions, articles, etc., but the emergent reputation, allowance onto sites, recommendations, and other data about the data (data derived from aggregation or specific to sites) has to be centralized. So the ecology contains three things: personal agents (localized XML and a personalized API for interacting with the data), harvesters (aggregators), and websites.
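
To make Lucas’s three-part ecology concrete, here’s one possible reading as a sketch. Every class name and metric below is my guess at what he means, not his spec:

    # Sketch of the commenter's ecology: decentralized data (personal
    # agents serving XML), centralized meta-data (harvesters), and
    # websites that decide allowance from the harvested meta-data.
    class PersonalAgent:
        """Decentralized: holds the user's own opinions, articles, etc."""
        def __init__(self, url, entries):
            self.url, self.entries = url, entries

    class Harvester:
        """Centralized: aggregates data *about* the data across agents."""
        def __init__(self):
            self.reputation = {}
        def harvest(self, agent):
            self.reputation[agent.url] = len(agent.entries)  # toy metric

    class Website:
        """Grants allowance based on the harvester's meta-data."""
        def __init__(self, harvester, threshold=2):
            self.harvester, self.threshold = harvester, threshold
        def allows(self, agent_url):
            return self.harvester.reputation.get(agent_url, 0) >= self.threshold

    agent = PersonalAgent("http://example.org/me.xml", ["post1", "post2", "post3"])
    h = Harvester(); h.harvest(agent)
    print(Website(h).allows(agent.url))  # True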
