Exchange networks are full of trust and reliability issues. How do you know that the intermediaries aren’t modifying the content? How do you know that swarm members in a p2p distribution scheme aren’t freeloading?
You can tackle the freeloading problem with a reputation scheme. The participants in the swarm report on how helpful their peers are, and the system computes statistics over those reports. The statistic for a given peer becomes that peer's reputation. In networks where peers have durable identities and participate over long periods this is probably sufficient to reduce freeloading to reasonable levels, though you would still need generous participants to enable newcomers to enter the swarms.
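A minimal sketch of such a scheme, in Python. The class name, the scoring scale, and the prior-blending are my own illustrative assumptions, not any deployed protocol; blending with a neutral prior is one way to give newcomers the generosity mentioned above:

```python
from collections import defaultdict

class ReputationTracker:
    """Hypothetical sketch: aggregate helpfulness reports from swarm
    peers into a reputation score. A real system would also weigh the
    credibility of reporters and decay stale reports."""

    def __init__(self):
        self.reports = defaultdict(list)  # peer id -> list of scores

    def report(self, reporter, peer, score):
        # score in [0, 1]: how helpful `peer` was to `reporter`
        self.reports[peer].append(score)

    def reputation(self, peer, prior=0.5, prior_weight=5):
        # Blend observed reports with a neutral prior so a brand-new
        # peer starts at 0.5 rather than zero -- the generosity that
        # lets new participants enter the swarm at all.
        scores = self.reports[peer]
        total = sum(scores) + prior * prior_weight
        return total / (len(scores) + prior_weight)
```

A peer with no reports scores 0.5; consistent good reports pull the score up, bad ones pull it down.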
Thus the participants collaborate to keep an eye on each other. These reputation models are statistical models of the behavior reports. Successful collaborative efforts demand a lot of forgiveness and generosity, so for many measures it's not a problem that the models are only estimates. What you sum up into the participants' reputations defines what the participants aspire to. For example, if you model how long a participant stays in the swarm, you create an incentive to stay longer.
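For instance, a sketch of that session-length measure, again with hypothetical names of my own; the mean observed session becomes one input to the reputation, so staying longer pays:

```python
import time
from collections import defaultdict

class SessionTracker:
    """Hypothetical sketch: estimate how long each peer tends to
    stay in the swarm. Publishing this statistic as part of the
    reputation creates an incentive to remain connected."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable clock, for testing
        self.joined = {}            # peer -> join timestamp
        self.durations = defaultdict(list)

    def join(self, peer):
        self.joined[peer] = self.clock()

    def leave(self, peer):
        started = self.joined.pop(peer, None)
        if started is not None:
            self.durations[peer].append(self.clock() - started)

    def mean_session(self, peer):
        # Only an estimate, like all these behavior models -- and
        # for incentive purposes an estimate is good enough.
        sessions = self.durations[peer]
        return sum(sessions) / len(sessions) if sessions else 0.0
```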
The standard solution to middleman problems is to insist on validation end-to-end. That’s one reason the .torrent files in BitTorrent have checksums in them. An intermediary can’t modify the distributed file without the end user noticing.
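The mechanics are simple: a .torrent file carries a SHA-1 digest for each fixed-size piece of the file, and the downloader re-hashes each piece it receives. A rough sketch (the helper names and the 256 KiB piece size are illustrative, not pulled from the spec):

```python
import hashlib

PIECE_SIZE = 256 * 1024  # illustrative; .torrent files use a fixed piece size

def piece_hashes(data, piece_size=PIECE_SIZE):
    """Compute per-piece SHA-1 digests, as a .torrent file carries them."""
    return [hashlib.sha1(data[i:i + piece_size]).digest()
            for i in range(0, len(data), piece_size)]

def verify_piece(piece, index, expected_hashes):
    """End-to-end check at the downloader: an intermediary that
    altered this piece in transit fails the comparison."""
    return hashlib.sha1(piece).digest() == expected_hashes[index]
```

Because the digests come from the publisher and the comparison happens at the receiver, nothing in between can tamper undetected.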
I’m interested in how to use swarming approaches on streaming data; for example video or audio broadcast, blog pings, weather or other sensor networks. It’s harder to get an end-to-end solution for these streams. The broadcaster could sign every packet, I guess, but that would add significantly to the costs.
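To make the per-packet cost concrete, here's a sketch using an HMAC tag as a stand-in for a signature. This is an assumption-laden simplification: a real broadcast would need an asymmetric signature, since handing receivers the shared key would let them forge packets, and public-key operations are far more expensive still:

```python
import hashlib
import hmac

# Stand-in shared key. In a real broadcast this would have to be an
# asymmetric signing key held only by the broadcaster.
KEY = b"broadcaster-secret"
TAG_LEN = 32  # SHA-256 HMAC: 32 bytes of overhead on *every* packet

def sign_packet(payload):
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_packet(packet):
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

Even in this cheap symmetric form, every packet of the stream pays a hashing pass plus 32 bytes of overhead; with real signatures the arithmetic cost per packet grows by orders of magnitude.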
Another possible approach is to ensure that the checksums and the packets they validate travel by different and hard-to-predict routes. Participants can report on their peers’ adherence to a rule that checksums should never travel with their associated packets.
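A toy monitor for that rule, with names of my own invention: each participant records which peer delivered each packet and which delivered its checksum, and flags any peer that delivered both. Those flags are exactly the kind of behavior reports that feed the reputation scheme above:

```python
from collections import defaultdict

class RouteSeparationMonitor:
    """Hypothetical sketch: flag peers that violate the rule that a
    checksum must never travel with the packet it validates."""

    def __init__(self):
        self.packet_senders = defaultdict(set)    # packet id -> peers
        self.checksum_senders = defaultdict(set)  # packet id -> peers

    def saw_packet(self, packet_id, peer):
        self.packet_senders[packet_id].add(peer)

    def saw_checksum(self, packet_id, peer):
        self.checksum_senders[packet_id].add(peer)

    def violators(self):
        # Any peer that sent both a packet and its checksum broke
        # the separation rule; report it to the reputation system.
        bad = set()
        for pid, senders in self.packet_senders.items():
            bad |= senders & self.checksum_senders[pid]
        return bad
```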