Overview of Cache Array Routing Protocol (CARP)

Sharding is a mechanism for distributing data or load in distributed systems design. One described method for sharding or load balancing HTTP requests across proxy servers is “Cache Array Routing Protocol” (CARP).

CARP itself is a form of weighted Rendezvous Hashing (also referred to as highest random weight (HRW)), which (quoting Wikipedia):

  • allows clients to achieve distributed agreement on a set of k options out of a possible set of n options.

The general algorithm for:

  1. calculating load factors (per server) using relative weights
  2. selection based on URI hash + server hash + load factor

is described in the IETF draft: https://tools.ietf.org/html/draft-vinod-carp-v1-03

CARP’s main draw is its support for weights, which in practical terms allows load to be balanced with consideration for heterogeneous hardware capacities. For example, a data center with servers of varying generations that can each handle a different amount of load.

Load factor calculation:

Reference: https://tools.ietf.org/html/draft-vinod-carp-v1-03#section-3.3

[Image: load factor multiplier calculation from section 3.3 of the IETF draft]

Load factor calculation notes

  • The “normalization” of the weight terms prior to performing the load calculation means the weights are relative to each other. For example, weights of 4, 4, 2, 3 are equivalent to 400, 400, 200, 300.
  • The normalized weights are sorted in ascending order, from smallest to largest (e.g. 0.1, 0.2, 0.2, 0.3, ...), and together sum to 1.0.
  • The load factors when multiplied together should be approximately equal to 1, since the load factor calculation is essentially calculating N weighted roots of 1 (a great insight from Marcel Flores!)
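
For concreteness, a rough Python sketch of the multiplier calculation might look like the following (my own reading of section 3.3 of the draft; the function name and structure are illustrative and not taken from the draft or the example repo):

def carp_load_factors(weights):
    # normalize raw relative weights to probabilities that sum to 1.0,
    # sorted in ascending order as the draft requires
    total = float(sum(weights))
    probs = sorted(w / total for w in weights)
    num = len(probs)
    factors = []
    x_last = 0.0
    p_last = 0.0
    product = 1.0  # running product X_1 * X_2 * ... * X_{k-1}
    for k, p_k in enumerate(probs, start=1):
        kk1 = float(num - k + 1)
        x_k = (kk1 * (p_k - p_last)) / product
        x_k += x_last ** kk1
        x_k **= 1.0 / kk1
        factors.append(x_k)
        product *= x_k
        x_last = x_k
        p_last = p_k
    return factors

Each multiplier here is effectively one of the weighted roots mentioned above, which is why the full product comes out to roughly 1.0.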

Backend selection:

Where “Backend” refers to the servers behind the CARP’ing service that receive the proxied traffic.

Backend selection, also described in the IETF draft, is done by combining:

  • URI hash
  • Backend group hash
  • server load factor

The backend with the largest value is selected. In pseudocode:

max_hash = 0
backend_selection = None
for backend in backend_list:

    hash = carp_combine_hash(uri_hash, backend.hash)
    weighted_hash = backend.load_factor * hash

    if weighted_hash > max_hash:
        max_hash = weighted_hash
        backend_selection = backend

Where carp_combine_hash is an XOR, followed by a large prime multiplication and a left rotation, to promote diffusion and spread the combined hashes across the hash space.

A potential 64-bit C++ implementation might look like the following (where CARP64_END_PRIME is some large prime constant and ROTL64 is a 64-bit rotate-left):

uint64_t carp64_combine_hash(uint64_t URL_Hash, uint64_t Backend_Hash)
{
        // XOR the two hashes together
        uint64_t Combined_Hash = (URL_Hash ^ Backend_Hash);
        // multiply by a large prime to spread similar inputs apart
        Combined_Hash += Combined_Hash * CARP64_END_PRIME;
        // rotate left by 21 bits for additional diffusion
        Combined_Hash = ROTL64(Combined_Hash, 21);
        return Combined_Hash;
}

Example in Python

I’ve written a basic example demonstrating the mechanism (with no consideration for edge cases or bad user input).

Reference: https://github.com/tinselcity/experiments/blob/master/carp/carp.py

One big caveat to doing this in Python vs. C/C++ is that Python integers are arbitrary precision, so some of the calculations have to be forced back into 64-bit range with a & 0xffffffffffffffff mask, since much of the original description deals with hashing and bitwise rotations at fixed 32- or 64-bit widths.
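
As an illustration, a masked Python version of the combine step might look like this (a sketch only; the rotate-by-21 mirrors the C++ above, and the prime 0x62531965 is the constant from the original 32-bit CARP description, standing in here for whatever CARP64_END_PRIME would be):

MASK64 = 0xffffffffffffffff

def rotl64(x, r):
    # rotate left within 64 bits; the mask folds Python's arbitrary-precision
    # integers back into a fixed 64-bit width
    return ((x << r) | (x >> (64 - r))) & MASK64

def carp64_combine_hash(url_hash, backend_hash, prime=0x62531965):
    # XOR, multiply by a large prime, rotate left -- same shape as the C++ above
    combined = (url_hash ^ backend_hash) & MASK64
    combined = (combined + combined * prime) & MASK64
    return rotl64(combined, 21)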

I generated a large file of random strings with:

#!/bin/bash
# ref: https://stackoverflow.com/a/47502029

openssl rand -hex $(( 1000000 * 32 )) | \
while IFS= read -rn64 -d '' r; do
    echo "$r"
done

Using Backend Configuration:

{ "servers": [
  	{ "name": "server_0001", "weight": 1.0 },
  	{ "name": "server_0002", "weight": 1.0 },
  	{ "name": "server_0003", "weight": 2.0 },
  	{ "name": "server_0004", "weight": 2.5 },
  	{ "name": "server_0005", "weight": 3.0 },
  	{ "name": "server_0006", "weight": 4.0 }
  ]}

Running:

~>./carp.py -c ./carp.conf.json -u ./uris.lst
------------------------------------------------------
    name           weight   load_factor         count   
------------------------------------------------------
server_0001          1.00     0.8736            74408
server_0002          1.00     0.8736            74310
server_0003          2.00     0.9926           147108
server_0004          2.50     1.0399           185118
server_0005          3.00     1.0842           222218
server_0006          4.00     1.1709           296838
  • The load factors multiplied together are approximately 1.0 (1.000046764).
  • The effect of the weights and their relative ratios can be seen in the server hit counts, e.g. server_0001 with a weight of 1.00 receives approximately a quarter of the traffic of server_0006 with a weight of 4.00.
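
As a cross-check, feeding the same weights into the hypothetical load-factor sketch from earlier should reproduce the load_factor column approximately:

weights = [1.0, 1.0, 2.0, 2.5, 3.0, 4.0]
for w, f in zip(weights, carp_load_factors(weights)):
    print("%.2f %.4f" % (w, f))
# expected to land near 0.8736, 0.8736, 0.9926, 1.0399, 1.0842, 1.1709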

Drawbacks

Scaling with Servers

The CARP selection function is O(n), where n is the number of backend servers, since a combined hash must be computed for every server on every request. In a very large data center with hundreds or thousands of servers, this could become a bottleneck.

Popularity

Real internet traffic is not so random. Some URLs are more popular than others, and the pattern of usage might look like a Zipf distribution. Accounting for this might mean the load for popular URLs has to be spread across additional servers. One approach is to rotate between the top N highest-random-weight servers, rather than always picking the first (see the sketch below). This adds complexity, however, and state must be shared across the CARP’ing servers in order to preserve consensus about which servers popular URLs are proxied to.
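
A minimal sketch of that idea, reusing the hypothetical carp64_combine_hash from above (the names and the simple round-robin rotation scheme are illustrative, not from the draft):

def select_top_n(uri_hash, backends, n, rotation):
    # backends: list of (backend_hash, load_factor, name) tuples
    # rotation: a per-URL counter (or random value) used to spread requests
    # across the top n candidates instead of always picking the max
    scored = []
    for backend_hash, load_factor, name in backends:
        weighted = load_factor * carp64_combine_hash(uri_hash, backend_hash)
        scored.append((weighted, name))
    top = sorted(scored, reverse=True)[:n]
    return top[rotation % len(top)][1]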

Summary

In summary, CARP is just one of many ways to shard load across servers. It’s an interesting approach to load balancing that accounts for physical server capacity. Thank you to Marcel Flores for his help and intuition with the load factor calculations.

References

  1. CARP IETF Draft
  2. Rendezvous Hashing
Written on May 27, 2022