Welcome to the second and last part of the series Consistent Hashing for Distributed Cache Systems. In our previous article, we established what a distributed cache system is and what we use distributed hashing for, walked through a sample usage of distributed hashing, and then looked at the problem of rehashing and the downsides of the distributed hashing approach. In this article, we try to understand consistent hashing and how it can be used to mitigate the problem of rehashing.

Consistent Hashing

Consistent Hashing is a technique that solves the problem of rehashing by providing a scheme that does not directly depend on the number of cache servers. In the distribution described in part 1, the formula depends on the number of servers N: for every hash, we take its modulo by N. Consistent Hashing is independent of N.
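As a quick refresher on the part 1 scheme, here is a tiny standalone sketch (the key names and the FNV hash are illustrative choices, not taken from the original code) showing how hash(key) mod N reshuffles most keys when N changes:

package main

import (
   "fmt"
   "hash/fnv"
)

// bucket returns the server index for a key under the simple modulo scheme.
func bucket(key string, n uint64) uint64 {
   h := fnv.New64a()
   h.Write([]byte(key))
   return h.Sum64() % n // the result depends directly on the number of servers N
}

func main() {
   keys := []string{"user:1", "user:2", "user:3", "user:4", "user:5"}
   // Going from N=4 to N=5 servers changes the target server for most keys.
   for _, k := range keys {
      fmt.Printf("%s -> server %d (N=4), server %d (N=5)\n", k, bucket(k, 4), bucket(k, 5))
   }
}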
Consistent Hashing works by mapping all the servers and keys to points on a unit circle, also called a hash ring. This allows the system to scale with minimal effect on the overall distribution.

First, suppose we have a hash function whose output range is 0 to 2^32 − 1 (32-bit values). This range (0 to 2^32 − 1) is mapped onto a unit circle, meaning every value in the range corresponds to a point on the hash ring. Every server is hashed with this function and is considered to be placed on the edge of the circle at the point its hash maps to. The same is done with every key. To find the server a particular key will be stored on or fetched from, we hash the key, locate it on the unit circle, and then move clockwise until we find a server. The first server we find is the answer.
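Before looking at the diagram, here is a minimal standalone sketch of the clockwise lookup, assuming four servers whose ring positions are made-up values (the real implementation later in this article hashes the servers instead of hardcoding positions):

package main

import (
   "fmt"
   "sort"
)

func main() {
   // Made-up, already-sorted ring positions for S0..S3 on a 0..2^32-1 ring.
   ring := []uint32{500000000, 1500000000, 2500000000, 3500000000}
   servers := []string{"S0", "S1", "S2", "S3"}

   keyHash := uint32(2600000000) // pretend this is hash(key)

   // Move clockwise: the first position >= the key's hash owns the key.
   i := sort.Search(len(ring), func(j int) bool { return ring[j] >= keyHash })
   if i == len(ring) { // went past the largest position: wrap to the start
      i = 0
   }
   fmt.Println("key is stored on", servers[i]) // prints: key is stored on S3
}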

(Diagram: keys K1–K8 and servers S0–S3 hashed onto the hash ring.)

As you can see in this diagram, all the keys and servers have been hashed and mapped onto the unit circle or hash ring. As per our explanation, K1 and K2 will be stored on S1, K3 and K4 on S2, K5 and K6 on S3, and K7 and K8 on S0.

Now let us see what happens when a server is removed. Suppose S1 is removed; for the key K1 we search in a clockwise direction and get the next server, S2. The same happens for K2. All the other keys remain unchanged. This was the problem we were trying to solve in the first place, and we were able to do so: when a server is removed or added, only about K/N keys are reshuffled. Perfect, isn't it?

Well, no. In this example we had just 2 keys mapped to S1, and upon its removal they moved to S2. But what if there were 200 thousand keys on S1? All of them moving to S2 could exceed S2's capacity and potentially crash it. A ripple effect would kick in, and in no time all our servers would be down. How do we mitigate this?

Well, there is a trick we use to solve this problem. To even out the load, we create a fixed number of replicas of each cache server on the ring, known as virtual nodes. Suppose we have j replicas for every ith node; we then modify the nomenclature: instead of using Si to denote the ith node, we use Sij to denote the jth replica of the ith node. Suppose we have 4 servers and 10 virtual nodes each. Instead of mapping S1 directly onto the circle, we map S10, S11, S12, …, S19; similarly S20, S21, …, S29 for S2; S30, S31, …, S39 for S3; and so on.
Please note that we are not adding actual physical machines. All the keys that are mapped onto Sij are internally stored on Si. To find which server a key is stored on, just find the next virtual node on the circle; if it is Sij, then Si is the answer.
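To make the virtual-node idea concrete, here is a small hedged sketch (the server name, replica count, and FNV hash are illustrative) of how a single physical server turns into many points on the ring; the AddNode function shown later follows the same keying pattern:

package main

import (
   "fmt"
   "hash/fnv"
)

func hashOf(s string) uint64 {
   h := fnv.New64a()
   h.Write([]byte(s))
   return h.Sum64()
}

func main() {
   const replicas = 10 // virtual nodes per physical server (illustrative)
   server := "S1"
   // One physical server contributes `replicas` scattered points on the ring.
   for j := 0; j < replicas; j++ {
      virtualNode := fmt.Sprintf("%s%d", server, j) // "S10", "S11", ..., "S19"
      fmt.Printf("%s -> ring position %d\n", virtualNode, hashOf(virtualNode))
   }
}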

Implementation

The only thing to think about in its implementation is how to get the next bin moving in a clockwise direction. It can be done easily with binary search. We store the hashes of all the servers (including virtual nodes) in an array and keep it sorted. As soon as we get a key, we hash it and binary-search for the first server hash that is greater than or equal to the key's hash (an upper_bound-style search). Since the array depicts a ring, if the search runs past the end of the array, we wrap around and return the first server (the first element of the array) as the answer.
Trust me, just this little trick is the heart of the Consistent Hashing algorithm. Let us look at a mock implementation in Golang. You can find the complete code here.

First and foremost, we define the basic functions in an interface.

type Conshash interface {
   AddNode(Data)
   RemoveNode(Data)
   GetNode(key string) *Data
}

There are some interfaces and structs that define our system:

type Data interface {
   Data() string
}
type hashFunction interface {
   GetHash([]byte) uint64
}

type ExternalConfig struct {
   HashFn           hashFunction
   RepetitionFactor int
   ServerCount      int
}

type Store struct {
   Config     ExternalConfig
   memberList map[string]*Data
   hashMapper map[uint64]*Data
   hashList   []uint64
}

Let us see what these containers (interfaces and structs) do.

  • Data is an interface which allows us to use any kind of value for a server. You can choose any data type or struct; just implement the Data() function and it is good to be used.
  • hashFunction allows you to pass any hash function of your choice into the system.
  • RepetitionFactor and ServerCount are the number of virtual nodes per server and the number of servers, respectively.
  • memberList stores the servers which are added to the system.
  • hashMapper maintains a map from a hash to its server. The algorithm gives you the hash of the server on which a key is stored; this map allows you to get the server back from that hash.
  • hashList depicts the ring: it is a sorted array of the hashes of all the servers (including virtual nodes) in memberList.

Now let us try to make a new store (distributed caching system).

func NewConshash(data []Data, config ExternalConfig) *Store {
   s := Store{
      Config:     config,
      memberList: make(map[string]*Data),
      hashMapper: make(map[uint64]*Data),
   }
   for _, d := range data {
      s.AddNode(d)
   }

   return &s
}

This function just takes the config and data (an array of servers). It initiates a new store and then iterates over data, adding each server. Let us see what the AddNode function looks like.

func (s *Store) AddNode(d Data) {
   // Place RepetitionFactor virtual nodes for this server on the ring.
   for i := 0; i < s.Config.RepetitionFactor; i++ {
      key := fmt.Sprintf("%s%d", (d).Data(), i)
      hash := s.Config.HashFn.GetHash([]byte(key))
      s.hashList = append(s.hashList, hash)
      s.hashMapper[hash] = &d
   }
   // Keep the ring sorted so lookups can binary-search it.
   sort.Slice(s.hashList, func(i int, j int) bool {
      return s.hashList[i] < s.hashList[j]
   })
   s.memberList[(d).Data()] = &d
}


We get a server (data), iterate RepetitionFactor times, make a key for each virtual node, hash it, and then save the hash into hashList. We also add the server to the hashMapper map for the reasons explained above. Let us now look at the GetNode function.

func (s *Store) GetNode(key string) *Data {
   hash := s.Config.HashFn.GetHash([]byte(key))
   // Find the first virtual node whose hash is >= the key's hash.
   node := sort.Search(len(s.hashList), func(i int) bool {
      return s.hashList[i] >= hash
   })
   // The array depicts a ring: wrap around to the first virtual node.
   if node >= len(s.hashList) {
      node = 0
   }
   partition := s.hashMapper[s.hashList[node]]
   server := s.memberList[(*partition).Data()]
   return server
}

This function does nothing but the upper_bound-style binary search described above, wraps the result around to 0 if needed, and returns the answer. Last, let us look at the driver code for the program.

type testS string

type hasher struct{}

func (t *testS) Data() string {
   return string(*t)
}

func (t hasher) GetHash(b []byte) uint64 {
   h := fnv.New64a() // requires the "hash/fnv" import
   h.Write(b)
   return h.Sum64()
}
func main() {

   cfg := ExternalConfig{
      HashFn:           hasher{},
      RepetitionFactor: 2,
      ServerCount:      2,
   }
   server1 := testS("server3")
   server2 := testS("server4")

   // ds := []testS{server1, server2}
   c := NewConshash(nil, cfg)
   c.AddNode(&server1)
   c.AddNode(&server2)
   addr := c.GetNode("server4")
   pp := *addr
   as := pp.Data()
   fmt.Println("Address of the key is", as)

}


We have made a string-based implementation here, and it is self-explanatory. We make a store c, add servers to it, and then try to predict which server a particular key will be stored to or fetched from. Please note this whole article was about finding the server on which a key is stored; it has nothing to do with the value associated with that key.
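To show where this routing step would sit in a real cache client, here is a hedged sketch built on the Store above; the fetchFromServer helper is hypothetical and stands in for an actual network call to the chosen cache server:

// Hypothetical helper: issue a GET for key to the cache server at addr.
func fetchFromServer(addr, key string) (string, error) {
   // ... network call to the cache server would go here ...
   return "", nil
}

// Get routes a cache read: pick the owning server, then ask it for the value.
func Get(c *Store, key string) (string, error) {
   node := c.GetNode(key)            // which server owns this key?
   addr := (*node).Data()            // e.g. "server3"
   return fetchFromServer(addr, key) // the value lookup happens on that server
}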

And this is it. As stated earlier, consistent hashing is a very simple but very powerful technique used in distributed cache systems. Big systems like Amazon's Dynamo use consistent hashing (with virtual nodes) to provide high availability, and distributed hash tables such as Chord are built around the same idea.
I am myself learning more about distributed computing and the tricks used to solve the day-to-day challenges faced in this domain, and I shall keep posting more about them on this blog. In the meantime, I hope you enjoyed the series 'Consistent Hashing for Distributed Cache Systems'. Please comment with any doubts or suggestions you have. Thank you.

