I don't know if I've blogged on this before, but it's been on my mind a while. I've noticed that Bittorrent swarms tend to be unstable.
Bittorrent chops up the data into multiple pieces, and then the various clients in the swarm play mix and match until they have all the pieces they need for a full copy. Then (if they're well behaved) they stick around for a bit longer and give more copies of the pieces to other members of the swarm.
When a client joins the swarm, it gets a mapping of all the pieces in the swarm, and a listing of who has copies of which pieces. It then asks whoever it can for the pieces it still needs to complete its copy.
The problem I've observed is that if one piece has copies on more clients than another piece, that first piece will tend to be copied to still more clients, simply by being more available, while the rarer piece becomes relatively rarer. This causes data distribution in swarms to grow "clumpy", with the availability gap between the most common and the least common piece growing wider and wider. If the only copies of a piece exist on swarm members which already have full copies, and those members drop out, then the swarm can't form more complete copies until someone with a full copy rejoins. Meanwhile, the rest of the swarm members copy from each other until they're all not-quite-complete, but can't get any farther than that.
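To make the "availability gap" concrete, here's a small sketch. The representation (each peer as a set of piece indices) is just an illustration of the piece map described above, not how any real client stores it:

```python
from collections import Counter

def availability(piece_maps):
    """Count how many peers hold each piece.

    piece_maps: list of sets, one per peer, each holding the piece
    indices that peer has (a hypothetical stand-in for the swarm's
    piece map).
    """
    counts = Counter()
    for held in piece_maps:
        counts.update(held)
    return counts

# A toy swarm of four peers sharing a five-piece file:
swarm = [{0, 1, 2}, {0, 1}, {0, 3}, {0, 1, 2, 3}]
counts = availability(swarm)

# The gap between the most and least available piece:
gap = max(counts.values()) - min(counts.get(p, 0) for p in range(5))
# Piece 0 is on all four peers; piece 4 is on none, so the gap is 4 --
# and piece 4 can never be completed from within this swarm.
```

In this toy swarm the "clump" is already fatal: every peer will eventually converge on pieces 0 through 3 and stall.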
What I'd like to suggest are three changes to Bittorrent swarm behavior, spanning both the "server" and "client" sides of the connection.
The change to the client is pretty simple: look at the map of available pieces and nodes, and try to grab the rarest ones first. Even if the client misbehaves and doesn't plan on contributing to the health of the swarm, it still makes sense to grab the rarest pieces before they disappear.
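As a sketch, rarest-first selection is just a sort of the still-needed pieces by swarm-wide availability. The names here are illustrative; a real client would build the availability map from the bitfields its peers advertise:

```python
def rarest_first(needed, availability):
    """Order the pieces this client still lacks, rarest first.

    needed: set of piece indices the client doesn't yet have.
    availability: dict mapping piece index -> number of peers
    currently holding a copy.
    """
    return sorted(needed, key=lambda p: availability.get(p, 0))

# Piece 3 has only one copy in the swarm, so it's fetched first:
avail = {0: 9, 1: 2, 2: 5, 3: 1}
order = rarest_first({0, 1, 2, 3}, avail)
# order == [3, 1, 2, 0]
```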
The first change to the server is a bit trickier, and would require support from the client. If the client asks for piece A, the server should give it piece A--after first giving it piece B, a piece which is currently rare in the swarm and which it knows the client doesn't already have.
Aside from requiring the client to recognize that the data it was handed wasn't initially the data it asked for, I can see one other problem with the latter approach. If the client is already asking for piece B from another node, then getting a copy of piece B from the node it asked for piece A is a waste of time and bandwidth. The client could inform the server which pieces it's already grabbing, but that would also cost some efficiency. A compromise might be reached where the swarm would only support slipping in pieces from the rarest N% of pieces, so the client would only need to report pieces from that set that it was already grabbing.
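A sketch of that compromise on the server side might look like this. Everything here (the function name, the N% cutoff, the `client_fetching` report) is hypothetical; the post describes the protocol idea, not an existing API:

```python
def slip_piece(requested, client_has, client_fetching, availability, n_pct=10):
    """Pick a rare piece to send before the requested one, or None.

    Only pieces in the rarest n_pct% of the swarm are candidates, and
    only if the client neither has them (client_has) nor has reported
    already fetching them elsewhere (client_fetching).
    """
    ranked = sorted(availability, key=availability.get)
    cutoff = max(1, len(ranked) * n_pct // 100)
    rare_set = ranked[:cutoff]
    for piece in rare_set:
        if piece != requested and piece not in client_has and piece not in client_fetching:
            return piece
    return None

# Twenty pieces where piece i has i+1 copies; the rarest 10% is {0, 1}.
avail = {i: i + 1 for i in range(20)}
slipped = slip_piece(requested=5, client_has={0}, client_fetching=set(),
                     availability=avail)
# The client already has piece 0, so piece 1 gets slipped in.
```

Because candidates are restricted to the rarest N%, the client's report stays small: it only ever needs to mention in-flight pieces from that bracket.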
Varying the client's pull pattern to fall within, overlap with, fall near, or stay distant from that N% segment each has its advantages and disadvantages, and I'm not sure which would outweigh the others. On one hand, avoiding the N% bracket is a destabilizing influence on the swarm, but saves incident overhead. On the other hand, favoring the N% bracket means the client grabs the rarest pieces quickly, but increases the incident overhead. Favoring the region just outside the N% bracket avoids the incident overhead, but destabilizes the swarm, and assumes there are a number of "slipping" seeders.
The second change to the server side would be to *proactively* push data to other clients, based on observed connectivity--to some extent, each client is aware of the bandwidth and uptime of other clients. If a client is observed to have a lot of bandwidth available to it, or is observed to be a stable member at a lower level of bandwidth, it makes sense for the swarm to push rarer pieces to places where they can be copied from more quickly. (For the stable, low-bandwidth clients, this has the added benefit of further reducing their particular risk of missing out on rare pieces.)
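The push-target choice above might be sketched like this. The scoring weights and the peer representation are arbitrary illustrations, not part of any real client:

```python
def push_targets(peers, rare_pieces):
    """Rank peers as push targets for rare pieces, best target first.

    peers: dict mapping peer name -> (bandwidth_kbps, uptime_hours,
    set of pieces held). The score favors high bandwidth and long
    observed uptime, per the post; the 100x uptime weight is an
    arbitrary choice for illustration.
    """
    def score(item):
        _, (bandwidth, uptime, _) = item
        return bandwidth + 100 * uptime
    ranked = sorted(peers.items(), key=score, reverse=True)
    plan = {}
    for name, (_, _, held) in ranked:
        # Only push a piece the target doesn't already have.
        missing = [p for p in rare_pieces if p not in held]
        if missing:
            plan[name] = missing[0]
    return plan

# "b" is slower but has been up for two days, so it scores highest:
peers = {"a": (1000, 2, {1}), "b": (200, 48, set())}
plan = push_targets(peers, rare_pieces=[1, 7])
# plan == {"b": 1, "a": 7}
```

Note that the stable low-bandwidth peer ("b") gets the rarest piece first, which is exactly the risk-reduction benefit described in the parenthetical above.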
Just some thoughts...
--
:wq
Tuesday, March 16, 2010