I’m finding it very interesting to look at the challenges of creating a reputation system that allows its participants to remain anonymous. I think this is key: the right solution to the Internet identity design problem must support keeping the user’s identity compartmentalized. Only that can maintain privacy. If on the one hand we want communications that are more usefully tied to an actor’s reputation, while on the other hand we want to keep that actor’s total identity fragmented, then we must find a way for him to maintain a number of personas on the net. The basic persona he adopts should be quite private, quite anonymous.
Consider the spam problem as a benchmark. This is the problem of guarding open systems from bad actions and bad actors. It arises in open comment systems (e.g. blog comments), open editing systems (e.g. wikis), open messaging systems (e.g. internet email), and of course in open source and open science.
All the solutions focus on sorting. Sorting actions into good ones and bad ones. Sorting actors into good ones and bad ones.
Lots of tricks exist for sorting the actions: filtering out postings with bad words, or with links to bad sites; training statistical recognizers to let good things through and shuttle bad-looking things off for further analysis or disposal; having a moderator or editor pass judgment on the individual actions.
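To make the flavor of these tricks concrete, here is a minimal sketch in Python; the word list, domain list, and link threshold are entirely hypothetical stand-ins for what a real filter or trained recognizer would use.

```python
BAD_WORDS = {"viagra", "lottery"}          # hypothetical blocklist
BAD_DOMAINS = {"spam.example.com"}         # hypothetical blocklist

def sort_action(text: str, links: list[str]) -> str:
    """Sort a single posting into 'good', 'bad', or 'review'."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    if words & BAD_WORDS:
        return "bad"                       # posting contains bad words
    if any(d in link for link in links for d in BAD_DOMAINS):
        return "bad"                       # posting links to bad sites
    if len(links) > 5:                     # suspicious: hand to a moderator,
        return "review"                    # standing in for the recognizer
    return "good"
```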
The bad-actor mechanisms work by building a model of the various actors; then, when sorting actions, we inform that sorting with the reputation of the actors involved. “Oh look, it’s the 10th posting from the same IP address in 30 seconds.” You might glance at the sender of an email message and say “Oh, Bob. He’s a good egg.” or “Ah, email from apache.org, they’re cool.”
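The IP-address case is the easiest to sketch. Here is one way such a check might look; the 10-posts-in-30-seconds threshold mirrors the example above rather than any real system.

```python
import time
from collections import defaultdict, deque

# The simplest possible actor model: remember recent actions per sender
# (keyed here by IP address) and flag suspicious frequency.
recent_posts: dict[str, deque] = defaultdict(deque)

def looks_like_flooding(ip: str, window: float = 30.0, limit: int = 10) -> bool:
    now = time.time()
    posts = recent_posts[ip]
    posts.append(now)
    while posts and now - posts[0] > window:
        posts.popleft()                    # forget actions outside the window
    return len(posts) >= limit             # "10th posting ... in 30 seconds"
```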
By design, for privacy reasons, most internet protocols make the mapping from actions back to actors very sloppy. It wouldn’t be hard, technically, to fix this. For example, senders could sign every message using a private key. Then recipients could, with the help of some directory services, map the signature back to the sender and from there to any number of services that could vouch for his reputation.
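As a sketch of what that tighter binding might look like, here is Ed25519 signing and verification using the third-party Python “cryptography” package; the directory lookup and reputation services are hand-waved away as assumptions.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sender_key = Ed25519PrivateKey.generate()     # the sender's private key
message = b"Hello, this is me acting on the net."
signature = sender_key.sign(message)          # sender signs every message

# A recipient would fetch the public key from some directory service;
# here we simply take it from the key pair.
public_key = sender_key.public_key()
try:
    public_key.verify(signature, message)
    # ...map the key to the sender, then to services vouching for him
except InvalidSignature:
    print("Signature check failed; treat the message as suspect.")
```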
This hasn’t happened, partly because shifting the installed base to some standard solution would be hard, but more so because this would assure the total collapse of any privacy for senders. It would make every message they send part of their record. That this record is highly distributed today is small comfort. It would enable big brother.
Any system that is going to be popular with real people for casual usage needs to allow for anonymous senders. And it’s not just the senders who desire this. If I’m running any one of the many kinds of open systems enumerated at the head of this post, I don’t wish to demand full disclosure from my contributors. I only want two things: lots of contributions, and a way to temper the damage done to my systems by bad actors. If I’m running a retail store I don’t want to demand that my visitors reveal their entire persona just to browse my offerings!
Is it possible to have useful actor reputation systems without demanding that the actors give up their privacy? This is a key design problem.
It appears that the answer is yes. Consider an example. Say I have an excellent reputation in some community. I request that this community write me a letter of introduction to an anonymous community. The letter says nothing more than “the bearer of this letter is a good guy.” I take the note to the anonymous community, and they provide me with a reputation/identity that I can use for anonymous actions. Recipients of those actions can then check that anonymous reputation. If I act badly in that persona, they place bad marks on the anonymous reputation; but these do not flow back to my original reputation – there is no back pointer. The only back pointer available is the link to the original community. I have damaged the reputation of my home community, and only that.
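Here is a toy sketch of that flow. It deliberately waves its hands at the cryptography (a real design would want blind or group signatures so the letter itself cannot be linked to the member), and the class and field names are mine, not any existing system’s.

```python
import secrets

class HomeCommunity:
    def __init__(self, name: str):
        self.name = name
        self.reputation = 100                 # the community's own standing

    def write_letter(self) -> dict:
        # Says nothing beyond "the bearer of this letter is a good guy."
        return {"issuer": self.name, "claim": "bearer is a good guy"}

class AnonymousCommunity:
    def __init__(self):
        self.personas: dict[str, dict] = {}

    def admit(self, letter: dict) -> str:
        # Mint a fresh persona; the only back pointer is the issuer.
        persona_id = secrets.token_hex(8)
        self.personas[persona_id] = {"issuer": letter["issuer"], "marks": 0}
        return persona_id

    def report_bad_action(self, persona_id: str, home: HomeCommunity) -> None:
        self.personas[persona_id]["marks"] += 1
        # Damage flows back to the issuing community, never the individual.
        if self.personas[persona_id]["issuer"] == home.name:
            home.reputation -= 1
```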
It’s an interesting cryptographic design problem. Could we design a system where sufficiently bad actions on the part of the anonymous actor can be fed back to his original persona, but that does not require us to trust the anonymous reputation communities to guard his privacy otherwise?
Interesting stuff. It seems to me that much of what you are saying could be applicable to Internet2’s Shibboleth (Shib) project. Shib is federated single sign-on similar to the Liberty Alliance, except it tries to keep the user id anonymous by passing an opaque token as an identifier. In the event of abuse, a site can block the identified token and report back to the user’s home site, who can then convert the opaque identifier to a user id and take whatever action is deemed necessary (in the uni context, probably a stern talking-to). The thought of some kind of dynamic “reputation” based system would be interesting, especially as “legitimate” p2p file transfer (again in the university context) is being contemplated in the LionShare project (http://lionshare.its.psu.edu/main/info/descript).
The use of opaque tokens is a good tool to pull out of the bag of tricks. Liberty, SAML/2, and Shibboleth are all doing a good job pulling good tricks out of the various bags to try to tackle these problems. What’s hard to see clearly is whether they are actually succeeding in solving them. They do appear to be one of our best hopes.
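The essence of the opaque-token trick is small enough to sketch. This is loosely in the spirit of the Shibboleth flow described above, not its actual protocol or wire format.

```python
import secrets

class HomeSite:
    """Issues handles that are opaque to every site but this one."""

    def __init__(self):
        self._handles: dict[str, str] = {}    # handle -> real user id

    def issue_handle(self, user_id: str) -> str:
        # The relying site sees only random bytes; no user id is derivable.
        handle = secrets.token_hex(16)
        self._handles[handle] = user_id
        return handle

    def resolve_abuse_report(self, handle: str) -> str | None:
        # Only the home site can map the handle back to a real user
        # (and deliver the stern talking-to).
        return self._handles.get(handle)
```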
Very interesting post! But why should people be wary of providing recommendations unless doing so in some way impacts them? Why should a recommendation be trusted if it does not have a cost?
In the real world recommendations can be believed because, if the person being recommended screws up, it reflects negatively on the recommender. Hence we are careful, and recommendations are trustworthy to an extent.
But in this online model, reputations can be built and destroyed on a daily basis with nobody feeling the pinch but the person who’s building and destroying his own reputation – an attractive proposition to spamsters?
I think an implementation of your proposal would be trustworthy if it imposed some cost on the recommenders: that is, a model that keeps track of how trustworthy a community is (maybe a count of how many of its recommendations later went bad?).
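One minimal way a relying party might keep such a ledger, with the scoring formula purely an assumption for illustration:

```python
class CommunityLedger:
    """Track how often a community's introductions later go bad."""

    def __init__(self):
        self.tallies: dict[str, list[int]] = {}   # community -> [good, bad]

    def record_outcome(self, community: str, went_bad: bool) -> None:
        tally = self.tallies.setdefault(community, [0, 0])
        tally[1 if went_bad else 0] += 1

    def trust(self, community: str) -> float:
        good, bad = self.tallies.get(community, [0, 0])
        total = good + bad
        # Unknown communities start at full trust only to keep the sketch
        # short; a real system would want a more cautious prior.
        return 1.0 if total == 0 else good / total
```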
when I dreamt of writing a P2P thingy that crossed Frontier’s data model with POE’s execution model (no, seriously ;-), the best approximation of real-life trust networks seemed to be a bidirectional certificate model: the data represented by an opaque token — e.g. the identity behind an arbitrary username — can be resolved at any node which knows, or has access to, the content of the token, provided the party requesting the resolution is sufficiently “trusted” by the party which holds the token’s backing data.
this seems like it could apply to the scenario you describe, because the opaque data can just as easily represent reputation as anything else, and the entities in the chain can be collective or singular. i.e. when the home community gets a message from the foreign community saying “your entity #12345 misbehaved,” it can weigh its relationship with #12345 against its relationship with the foreign community, and act accordingly.
just like real life, any time a party shares (e.g.) its identity with another party it runs the risk of that party sharing it with another party. but an arbitrary inquiry can’t resolve the identity associated with a particular token unless it first establishes a trust relationship with at least one of the parties which already knows that identity. meanwhile, #12345 retains the option of never sharing its “identity” data with any other nodes, which forces any party wishing to resolve that token to contact #12345 directly.
I don’t know whether anybody’s actually using that model…
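a bare-bones sketch of that trust-gated resolution might look something like this, with the trust values, threshold, and flat dictionaries all invented to keep the example small:

```python
class Node:
    def __init__(self, name: str):
        self.name = name
        self.trust: dict[str, float] = {}     # how much we trust each peer
        self.backing: dict[str, str] = {}     # token -> backing data we hold

    def resolve(self, requester: "Node", token: str) -> str | None:
        # Resolve the token only for sufficiently trusted parties.
        if self.trust.get(requester.name, 0.0) < 0.8:   # assumed threshold
            return None
        return self.backing.get(token)

home = Node("home")
foreign = Node("foreign")
home.backing["#12345"] = "alice@example.edu"   # hypothetical identity
home.trust[foreign.name] = 0.9                 # home trusts foreign enough

print(home.resolve(foreign, "#12345"))         # -> "alice@example.edu"
```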
interesting stuff. this is directly connected to my idea that if an author wants his work to spread and proliferate, he must remain anonymous. check out the blog I’m trying this out on, as a game. the mystery is an adhesive.