Director
A `director` declaration groups instances of `backend` into a list and defines a policy for choosing a member of the list, with the aim of distributing traffic across the backends. This is typically used for load balancing.
Directors vary in syntax depending on their policy. See policy variants.
Like backends, directors can be assigned to `req.backend`, and can also be used as a backend in other directors. Directors also have a health status, which is calculated as an aggregation of the health status of their constituent backends. The rules on whether a particular director is healthy or not depend on the configuration of the director and what type of policy the director uses. See Quorum and health.
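For example, a director can be selected as the backend for a request in `vcl_recv`, using the same syntax as for an individual backend. A minimal sketch (the director name `my_dir` matches the random director example below):

```vcl
sub vcl_recv {
  # Route this request through the director rather than a single backend
  set req.backend = my_dir;
  #FASTLY recv
}
```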
Policy variants
Directors are offered in the following policy variations:
Random
The `random` director selects a backend randomly from its members, considering only those which are currently healthy.
| Field | Property of | Required | Description |
|---|---|---|---|
| `.retries` | Director | No | The number of times the director will try to find a healthy backend, or to connect to the randomly chosen backend if the first connection attempt fails. If `.retries` is not specified, the director will use the number of member backends as the retry limit. |
| `.quorum` | Director | No | The percentage threshold that must be reached by the cumulative `.weight` of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend. |
| `.weight` | Backend | Yes | Each backend has a `.weight` attribute that indicates the weighted probability of the director selecting the backend. For example, a backend with weight `2` will be selected twice as often as one with weight `1`. |
In the following example, the random director will choose `F_backend1` half the time, and the other two backends 25% of the time each. At minimum, two backends must be healthy for their cumulative weight (~66%) to exceed the 50% quorum weight and qualify the director itself as healthy. If only one backend is healthy and the quorum weight is not reached, a `503` error containing "Quorum weight not reached" will be returned to the client if this director is the backend for the request. If the random director fails to connect to the chosen backend, it will retry randomly selecting a backend up to three times before indicating all backends are unhealthy.
```vcl
director my_dir random {
  .quorum = 50%;
  .retries = 3;
  { .backend = F_backend1; .weight = 2; }
  { .backend = F_backend2; .weight = 1; }
  { .backend = F_backend3; .weight = 1; }
}
```
In general, a random director will result in an even and stable traffic distribution.
Fallback
The `fallback` director always selects the first healthy backend in its backend list to send requests to. If Fastly fails to establish a connection with the chosen backend, the director will select the next healthy backend in the list.

This type of director is the simplest and has no properties other than the `.backend` field for each member backend.
In the following example, the fallback director will send requests to `F_backend1` unless its health status is unhealthy. If Fastly is unable to connect to `F_backend1` (e.g., a connection timeout is encountered), the director will select the next healthy backend. If all backends in the list are unhealthy, or all backends fail to accept connections, a `503` response containing "All backends failed" or "unhealthy response" is returned to the client.
```vcl
director my_dir fallback {
  { .backend = F_backend1; }
  { .backend = F_backend2; }
  { .backend = F_backend3; }
}
```
In a fallback director, all traffic goes to one constituent backend, so this kind of director is not used as a load balancing mechanism.
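Because a failed fallback director surfaces to the client as a `503`, some services intercept that status and serve a synthetic response instead. A minimal sketch, assuming a custom maintenance page is acceptable for your service (the markup is illustrative):

```vcl
sub vcl_error {
  # A 503 here can mean all backends in the director were sick or
  # refused connections; replace it with a synthetic maintenance page.
  if (obj.status == 503) {
    set obj.response = "Service Unavailable";
    synthetic {"<html><body><h1>We'll be back shortly</h1></body></html>"};
    return(deliver);
  }
  #FASTLY error
}
```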
Content
The `hash` director will select backends based on the cache key of the content being requested.
| Field | Property of | Required | Description |
|---|---|---|---|
| `.quorum` | Director | No | The percentage threshold that must be reached by the cumulative `.weight` of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend. |
| `.weight` | Backend | Yes | The weighted probability of the director selecting the backend. For example, a backend with weight `2` will be selected twice as often as one with weight `1`. |
In this example, traffic will be distributed across three backends, and requests with the same properties will always select the same backend, provided it is healthy. It does not matter who is making the request.
```vcl
director the_hash_dir hash {
  .quorum = 20%;
  { .backend = F_origin_0; .weight = 1; }
  { .backend = F_origin_1; .weight = 1; }
  { .backend = F_origin_2; .weight = 1; }
}
```
A `hash` director will not necessarily balance traffic evenly across the member backends. If one object on your website is more popular than others, the backend that the director associates with that object's cache key may receive a disproportionate amount of traffic. The `hash` director will prioritize achieving an allocation of keys to each backend that is in proportion to the configured `.weight` values, rather than maintaining a stable mapping of keys to specific backends.
The same principle can be achieved with a `chash` director configured with `.key=object`, and this method offers different trade-offs (see Consistent hashing below).
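As a sketch, that `chash` equivalent of the `hash` example above could be declared as follows (the `.id` values are illustrative; see the Consistent hashing section for the full set of properties):

```vcl
director the_chash_obj_dir chash {
  .key = object;  # hash on the cache key, as the hash director does (also the default)
  { .backend = F_origin_0; .id = "origin_0"; }
  { .backend = F_origin_1; .id = "origin_1"; }
  { .backend = F_origin_2; .id = "origin_2"; }
}
```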
Client
A `client` director will select a backend based on the identity of the client, expressed by `client.identity`, which by default is populated from `client.ip`. This is commonly known as 'sticky session load balancing' and is often used to lock a user to a nominated backend in order to make use of server-side session state, such as a shopping cart.
| Field | Property of | Required | Description |
|---|---|---|---|
| `.quorum` | Director | No | The percentage threshold that must be reached by the cumulative `.weight` of all healthy backends in order for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend. |
| `.weight` | Backend | Yes | Each backend has a `.weight` attribute that indicates the weighted probability of the director selecting the backend. For example, a backend with weight `2` will be selected twice as often as one with weight `1`. |
In this example, traffic will be distributed across three backends based on the identity of the user, which is derived from an application-specific cookie. Regardless of what URL is being requested, requests from the same user will always be sent to the same backend (provided that it remains healthy):
```vcl
director the_client_dir client {
  .quorum = 20%;
  { .backend = F_origin_0; .weight = 1; }
  { .backend = F_origin_1; .weight = 1; }
  { .backend = F_origin_2; .weight = 1; }
}

sub vcl_recv {
  set client.identity = req.http.cookie:user_id;  # Or omit this line to use `client.ip`
  set req.backend = the_client_dir;
  #FASTLY recv
}
```
A `client` director will not necessarily balance traffic evenly across the member backends. If one user's session makes more requests than other users' sessions, the backend that the director associates with that client identity may receive a disproportionate amount of traffic. The `client` director will prioritize achieving an allocation of users to each backend that is in proportion to the configured `.weight` values, rather than maintaining a stable mapping of users to specific backends.
The same principle can be achieved with a `chash` director configured with `.key=client`, and this method offers different trade-offs (see Consistent hashing below).
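As a sketch, that `chash` equivalent of the client director above could be declared as follows (names illustrative, reusing the cookie-based identity from the earlier example):

```vcl
director the_chash_client_dir chash {
  .key = client;  # hash on client.identity instead of the cache key
  { .backend = F_origin_0; .id = "origin_0"; }
  { .backend = F_origin_1; .id = "origin_1"; }
  { .backend = F_origin_2; .id = "origin_2"; }
}

sub vcl_recv {
  set client.identity = req.http.cookie:user_id;  # Or omit to use client.ip
  set req.backend = the_chash_client_dir;
  #FASTLY recv
}
```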
Consistent hashing
The `chash` director will select which backend should receive a request according to a consistent hashing algorithm. Depending on the `.key` property in the declaration, the `chash` director selects backends either based on the cache key of the content being requested (`.key=object`) or based on the identity of the client (`.key=client`). The former (`object`) is the default when the `.key` property is not explicitly specified. Commonly, consistent hashing on the cache key is used to 'shard' a large dataset across multiple backends.
| Field | Property of | Required | Description |
|---|---|---|---|
| `.key` | Director | No | Either `object` (select backends based on the cache key of the content being requested) or `client` (select backends based on the identity of the client, expressed by `client.identity`, which by default is populated from `client.ip`). The default is `object`. |
| `.seed` | Director | No | A 32-bit number specifying the starting seed for the hash function. The default is `0`. |
| `.vnodes_per_node` | Director | No | How many vnodes to create for each node (backend under the director). The default is 256. There is a limit of 8,388,608 vnodes in total for a `chash` director. |
| `.quorum` | Director | No | The percentage threshold that must be reached by the cumulative `.weight` of all healthy backends for the director to be deemed healthy. By default, the director is healthy if it has at least one healthy member backend. |
| `.id` | Backend | Yes | An attribute that is combined with the cache key to calculate the hash. If the ID is changed, reshuffling will occur and objects may shift to a different backend. |
This is the same mechanism Fastly uses in our clustering process to decide which server a cached object resides on in a POP. Consistent hashing means that the assignment of requests to backends will change as little as possible when a backend is added to the pool or removed from the pool. When a member of the pool goes offline, requests that would normally go to that backend will be distributed among the remaining backends. Similarly, when a backend comes back online, it takes over a little bit of every other backend's traffic without disturbing the overall mapping more than necessary.
In this example, traffic will be distributed across three backends. Requests with the same properties will always select the same backend, provided it is healthy.
```vcl
director the_chash_dir chash {
  { .backend = s1; .id = "s1"; }
  { .backend = s2; .id = "s2"; }
  { .backend = s3; .id = "s3"; }
}
```
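The optional director-level properties from the table above can be combined in a single declaration. A sketch with illustrative values (tuning these is optional; the defaults shown in the table apply otherwise):

```vcl
director the_tuned_chash_dir chash {
  .key = object;            # the default, shown here for clarity
  .seed = 12345;            # 32-bit starting seed for the hash function
  .vnodes_per_node = 1024;  # default is 256
  .quorum = 50%;
  { .backend = s1; .id = "s1"; }
  { .backend = s2; .id = "s2"; }
  { .backend = s3; .id = "s3"; }
}
```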
A `chash` director is similar in effect to a `hash` or `client` director (depending on the value of the `.key` property), but while `hash` and `client` directors will prioritize the even distribution of keys across the constituent backends of the director, a `chash` director will prioritize the stable mapping of each individual key to the target backend.
As a result, when backends in a `chash` director become unhealthy, there is much less reallocation of keys between the other backends than would be seen in a `hash` or `client` director. However, a `chash` director will also allocate traffic less evenly.
Importance to shielding
Directors are an integral part of Fastly's shielding mechanism, which collects and concentrates traffic from across the Fastly network into a single POP. When you enable shielding for a backend in your service, Fastly will generate a shield director in the declarations space of your VCL, and add shield selection logic into the `#FASTLY recv` macro in your `vcl_recv` subroutine.
To see the shield director and logic generated by enabling shielding, download the generated VCL for your service.
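As an illustrative sketch only (the director name and POP value are examples, not your service's actual generated output), a shield director uses the `shield` policy and names the POP on which to concentrate traffic:

```vcl
director my_shield_dir shield {
  # POP to use as the shield (illustrative value)
  .shield = "iad-va-us";
}
```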
Quorum and health
In VCL, each backend and director has a health status, which can be either sick or healthy. The status of an individual backend is determined by sending regular health check requests to the backend and testing for an expected response. You can query the health of the currently selected backend using the `req.backend.healthy` VCL variable, or the health of a nominated backend with `backend.{NAME}.healthy`.
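A minimal sketch of both forms, reusing the `F_backend1` name from the earlier examples:

```vcl
sub vcl_recv {
  #FASTLY recv
  # Health of whatever backend (or director) is currently selected
  if (!req.backend.healthy) {
    error 503 "No healthy backend";
  }
  # Health of a specific named backend
  if (!backend.F_backend1.healthy) {
    set req.http.X-Backend1-Sick = "1";
  }
}
```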
Directors also have a health status but, rather than being the direct result of a health check, it is derived from the health status of their member backends. The 'quorum' value allows this to be tuned for some director policies, as described above. This can become complex if a director is a member of another director.
A common use case for quorum is where the individual backends in a director cannot each handle the full load of inbound traffic. When enough of the backends in the director are offline that the remainder would be overwhelmed, it can be preferable to consider the entire director unhealthy. This can facilitate switching traffic to a different director that has a larger set of healthy backends, or taking the site offline entirely to help the backends recover faster.
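As a sketch of that pattern, a fallback director can switch between a primary and a secondary pool: when the primary's quorum is no longer met, the whole primary director is deemed sick and the fallback moves traffic to the secondary (all backend and director names here are illustrative):

```vcl
# Primary pool: sick when fewer than 75% of the weighted backends are healthy
director primary_pool random {
  .quorum = 75%;
  { .backend = F_primary_1; .weight = 1; }
  { .backend = F_primary_2; .weight = 1; }
  { .backend = F_primary_3; .weight = 1; }
  { .backend = F_primary_4; .weight = 1; }
}

# Secondary pool, used only while the primary director is unhealthy
director secondary_pool random {
  { .backend = F_secondary_1; .weight = 1; }
  { .backend = F_secondary_2; .weight = 1; }
}

# Directors can themselves be members of another director
director site_dir fallback {
  { .backend = primary_pool; }
  { .backend = secondary_pool; }
}
```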