jm + consistent-hashing   4

Consistent Hashing with Bounded Loads
'an algorithm that combined consistent hashing with an upper limit on any one server’s load, relative to the average load of the whole pool.'

Lovely post on Vimeo's eng blog about a new variation on consistent hashing -- incorporating a concept of overload-avoidance -- adding it to HAProxy and running it in production at Vimeo. All sounds pretty nifty! A quick sketch of the idea follows below. (via Toby DiPasquale)
via:codeslinger  algorithms  networking  performance  haproxy  consistent-hashing  load-balancing  lbs  vimeo  overload  load 
5 weeks ago by jm
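
For flavour, a minimal C++ sketch of the bounded-load idea (hypothetical names and hashing, nothing to do with the actual HAProxy patch): cap each server's key count at a small factor c times the current average load, and let a key whose nearest server on the ring is full spill over to the next server clockwise.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <string>

    // Hypothetical sketch, not the HAProxy/Vimeo code: a plain hash ring
    // where each server's key count is capped at ceil(c * average load),
    // and servers over the cap are skipped clockwise.
    struct BoundedLoadRing {
        std::map<uint64_t, std::string> ring;     // hash point -> server
        std::map<std::string, std::size_t> load;  // keys held per server
        std::size_t total_keys = 0;
        double c = 1.25;                          // load bound factor (1 + epsilon)

        static uint64_t h(const std::string& s) {
            return std::hash<std::string>{}(s);
        }

        void add_server(const std::string& server, int vnodes = 100) {
            for (int i = 0; i < vnodes; ++i)
                ring[h(server + "#" + std::to_string(i))] = server;
            load.emplace(server, 0);
        }

        // Per-server cap: ceil(c * average load), counting the key being placed.
        std::size_t capacity() const {
            return static_cast<std::size_t>(
                std::ceil(c * double(total_keys + 1) / double(load.size())));
        }

        // Walk clockwise from the key's hash point and take the first
        // server still under the cap.
        std::string assign(const std::string& key) {
            if (ring.empty()) return {};
            const std::size_t cap = capacity();
            auto it = ring.lower_bound(h(key));
            for (std::size_t i = 0; i <= ring.size(); ++i, ++it) {
                if (it == ring.end()) it = ring.begin();
                if (load[it->second] < cap) {
                    ++load[it->second];
                    ++total_keys;
                    return it->second;
                }
            }
            return {};  // unreachable: the cap always leaves room somewhere
        }
    };

Tuning c trades balance against stability: a value closer to 1 keeps servers more evenly loaded, but displaces more keys from their 'natural' position on the ring.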
Rendezvous hashing - Wikipedia, the free encyclopedia

Rendezvous or Highest Random Weight (HRW) hashing[1][2] is an algorithm that allows clients to achieve distributed agreement on a set of k options out of a possible set of n options. A typical application is when clients need to agree on which sites (or proxies) objects are to be assigned to. When k is 1, it subsumes the goals of consistent hashing, using an entirely different method.
hrw  hashing  hashes  consistent-hashing  rendezvous-hashing  algorithms  discovery  distributed-computing 
april 2016 by jm
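
A minimal C++ sketch of the HRW scheme described above (placeholder hash and helper names): score every candidate site against the key and keep the k highest-scoring sites, so every client computes the same assignment independently, with no shared ring state.

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <string>
    #include <vector>

    // Hypothetical HRW sketch: weight each site by hash(key, site) and
    // return the k sites with the highest weights for this key.
    std::vector<std::string> hrw_pick(const std::string& key,
                                      std::vector<std::string> sites,
                                      std::size_t k) {
        auto weight = [&key](const std::string& site) {
            return std::hash<std::string>{}(key + "|" + site);
        };
        std::sort(sites.begin(), sites.end(),
                  [&](const std::string& a, const std::string& b) {
                      const auto wa = weight(a), wb = weight(b);
                      return wa != wb ? wa > wb : a < b;  // highest weight first
                  });
        if (sites.size() > k) sites.resize(k);
        return sites;  // with k == 1 this plays the role of consistent hashing
    }

When a site goes away, only the keys that had it in their top k move elsewhere -- the same minimal-disruption property consistent hashing provides.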
Explanation of the Jump Consistent Hash algorithm
I blogged about the amazing stateless Jump Consistent Hash algorithm last year, but this is a good walkthrough of how it works.

Apparently one author, Eric Veach, is legendary -- https://news.ycombinator.com/item?id=9209891 : "Eric Veach is huge in the computer graphics world for laying a ton of the foundations of modern physically based rendering in his PhD thesis [1]. He then went on to work for Pixar and did a ton of work on Renderman (for which he recently got an Academy Award), and then in the early 2000ish left Pixar to go work for Google, where he was the lead on developing AdWords [2]. In short, he's had quite a career, and seeing a new paper from him is always interesting."
eric-veach  consistent-hashing  algorithms  google  adwords  renderman  pixar  history  coding  c  c++ 
march 2015 by jm
Jump Consistent Hash: A Fast, Minimal Memory, Consistent Hash Algorithm
'a fast, minimal memory, consistent hash algorithm that can be expressed in about 5 lines of code. In comparison to the algorithm of Karger et al., jump consistent hash requires no storage, is faster, and does a better job of evenly dividing the key space among the buckets and of evenly dividing the workload when the number of buckets changes. Its main limitation is that the buckets must be numbered sequentially, which makes it more suitable for data storage applications than for distributed web caching.'

Implemented in Guava. (The ~5-line algorithm itself is reproduced after this entry.) This is also noteworthy:

'Google has not applied for patent protection for this algorithm, and, as of this writing, has no plans to. Rather, it wishes to contribute this algorithm to the community.'
hashing  consistent-hashing  google  guava  memory  algorithms  sharding 
june 2014 by jm
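
Since it really is about five lines, here is the algorithm itself in C++, essentially as given in the Lamping & Veach paper:

    #include <cstdint>

    // Stateless jump consistent hash: no storage beyond the key and the
    // bucket count, and buckets must be numbered 0 .. num_buckets-1.
    int32_t jump_consistent_hash(uint64_t key, int32_t num_buckets) {
        int64_t b = -1, j = 0;
        while (j < num_buckets) {
            b = j;
            key = key * 2862933555777941757ULL + 1;
            j = static_cast<int64_t>(
                (b + 1) * (double(1LL << 31) / double((key >> 33) + 1)));
        }
        return static_cast<int32_t>(b);
    }

jump_consistent_hash(key, n) returns a bucket in [0, n); growing to n+1 buckets moves only about 1/(n+1) of the keys, which is the 'consistent' part.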
