You have a line segment (that's the key space). Breaking it into N equal parts (the buckets) means it is very easy (O(1)) to compute which part contains a given point. But if you increase or decrease N, all parts change (they shrink or grow, respectively). Only the first and the last part keep one of their ends unaffected; all the rest have both their ends moved.
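Here is a minimal sketch of that "equal parts" scheme (plain modular hashing, in toy Python of my own; the key names are made up): lookup is a single arithmetic step, but changing N from 10 to 11 moves almost every key.

```python
# bucket = hash(key) % N: O(1) lookup, but resizing remaps nearly everything.

def bucket_of(key, n_buckets):
    return hash(key) % n_buckets

keys = [f"key-{i}" for i in range(1000)]
moved = sum(1 for k in keys if bucket_of(k, 10) != bucket_of(k, 11))
print(f"{moved}/{len(keys)} keys changed buckets")   # typically around 90%
```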
Now consider breaking it into N parts, not necessarily of the same length. It is now easier to change the number of parts. Do you want more parts? Break one into two pieces. Want fewer parts? Merge two parts sitting next to each other. All the other parts stay firmly in place, unaffected. But now finding which part contains a given point gets more complicated (O(log N) if you keep the parts sorted, say in a red-black tree).
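A toy sketch of the "variable parts" scheme, assuming a ring of buckets kept in a sorted list (the Ring class and the md5-based point function are my own illustrative choices, not a standard API):

```python
import bisect
import hashlib

def point(s):
    # Map a string to a point on the segment [0, 2**32); md5 here stands in
    # for "approximately random permutation", it is not a requirement.
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2**32)

class Ring:
    def __init__(self, buckets):
        # Each bucket owns the slice of the segment that ends at its point.
        self._points = sorted(point(b) for b in buckets)
        self._names = {point(b): b for b in buckets}

    def bucket_of(self, key):
        # O(log N) lookup: binary search for the first bucket point at or
        # past the key's point, wrapping around at the end of the segment.
        i = bisect.bisect_left(self._points, point(key)) % len(self._points)
        return self._names[self._points[i]]

ring = Ring(["bucket-A", "bucket-B", "bucket-C"])
print(ring.bucket_of("some-key"))
```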
Hashtables themselves can be summarized in an interesting way: they are sorting...by hash (it doesn't matter if the original key space is not totally ordered; the hashes are). But if it is sorting, where is the log N factor in the complexity of hashtable operations? Well, there isn't one: hashes are small natural numbers (bounded by the hashtable size, which is O(N) in the number of elements it holds). And what do you think of when you want to sort small natural numbers? Counting sort. (Caveat: hash functions do not just aim to map a large key space into a sequence of small natural numbers; that's only part of the story. They also aim to approximate a random permutation of the original space, which is separately useful: it helps load-balance the buckets when the incoming distribution is not uniform. In consistent hashing, we sacrifice the first role of hash functions, but we keep this latter one.)
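One toy way to see the counting-sort connection: inserting into a hashtable is just dropping each item straight into the slot indexed by a small natural number (its hash modulo the table size). No comparisons, no log N anywhere.

```python
def group_by_hash(items, table_size):
    # Like the bucketing step of counting sort: place each item directly
    # at the index given by its bounded hash value.
    slots = [[] for _ in range(table_size)]
    for item in items:
        slots[hash(item) % table_size].append(item)
    return slots   # items end up grouped ("sorted") by their small hash

print(group_by_hash(["apple", "banana", "cherry", "date"], table_size=8))
```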
Trying to compress the discussion above:
- Hashtables are really counting sort
- (Yes, data structures are algorithms)
- An equal piece of the (approximately permuted) key space is mapped to each bucket
- Easy to find the corresponding bucket for a key (++)
- Changing the number of buckets affects the mapped spaces of all buckets (--)
- Consistent hashing differs in the following way:
- A variable piece of the (approximately permuted) key space is mapped to each bucket
- Harder to find the corresponding bucket for a key (--)
- Easy to split or merge buckets, without affecting any other buckets (++) (see the sketch right after this list)
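To make that last point concrete, here is the toy Ring class from the earlier sketch again (still my own illustration): adding a fourth bucket splits exactly one existing slice, so only the keys landing in the new bucket's slice move.

```python
# Reuses the Ring class defined in the earlier sketch.
old_ring = Ring(["bucket-A", "bucket-B", "bucket-C"])
new_ring = Ring(["bucket-A", "bucket-B", "bucket-C", "bucket-D"])

keys = [f"key-{i}" for i in range(1000)]
moved = sum(1 for k in keys if old_ring.bucket_of(k) != new_ring.bucket_of(k))
# Only the keys that fall into bucket-D's new slice change owners,
# compared with the ~90% churn of the modulo scheme above.
print(f"{moved}/{len(keys)} keys changed buckets")
```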
Bonus section: Bigtable (and the good old Bigtable paper here). In Bigtable, the rows are primarily sorted by row key. The tablets (think buckets) are of variable length, so that overloaded tablets can be split without touching the huge amount of data that potentially resides in the same bigtable. (One can help load-balance the tablets by - did you guess it? - sorting by hash first...) So, here you go. The rows are sorted, yet this can still rightfully be described as consistent hashing. Confused yet? Well, at least I tried. :)
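For flavor, here is roughly what that "sort by hash first" trick can look like in a store that keeps rows sorted by row key (the key format and the 4-hex-digit prefix below are just my illustrative choices, not anything prescribed by the Bigtable paper):

```python
import hashlib

def salted_row_key(user_id):
    # Prefix the natural key with a few hex digits of its hash, so that
    # lexicographic sorting scatters otherwise-adjacent ids across tablets.
    prefix = hashlib.md5(user_id.encode()).hexdigest()[:4]
    return f"{prefix}#{user_id}"

# Sequential ids no longer sort next to each other, spreading the load.
print(salted_row_key("user-000001"))
print(salted_row_key("user-000002"))
```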