Pub/Sub at the edge with Fanout
Fanout is a publish/subscribe message broker built into the Fastly platform and designed to power real-time and streaming applications. It lets you deliver live updates—such as chat messages, data feeds, or event notifications—instantly to browsers, mobile apps, servers, and other clients.
Fanout is available to Fastly Compute services. When your Compute service receives a client request, your application can perform a Fanout handoff—an operation that tells Fanout to take over the connection while directing the request to a backend. Based on instructions included in the backend response, Fanout can then keep the connection open and subscribe the connection to one or more channels, which act like named topics for broadcasting data. Your backend can be an external origin or a Fastly service—even the same Compute service.
To send data to clients, you publish a message to a channel using the Fastly Publishing API. Fanout distributes the message to all active connections subscribed to that channel. This allows you to transform a typical HTTP request into an event-driven stream using techniques like Server-Sent Events, Long-Polling, or WebSockets.
Fanout's architecture offers several advantages over proprietary streaming platforms:
- It runs at the Fastly edge, so any request pre-processing (like authentication) applies to your streaming traffic too.
- It works with standard HTTP clients and servers—including serverless functions.
- It allows you to turn any HTTP response into a real-time stream (e.g., streaming logs, progressive media, async results).
- No separate domains or streaming infrastructure are needed.
- It's built on the open GRIP protocol and powered by the open-source Pushpin project.
HINT: If you're looking for a completely plug-and-play solution for pub/sub, see our Pub/Sub Application.
Enabling Fanout
Enable Fanout for local development
Fastly's local development environment integrates with a local installation of Pushpin, the open-source GRIP proxy server maintained by Fastly, to enable local testing of Fanout features in Compute applications.
To test Fanout features in a local development environment, you will need all of the following:
- Fastly CLI version 11.5.0 or newer
- Viceroy version 0.14.0 or newer (usually managed by Fastly CLI)
- a local installation of Pushpin
To enable Fanout features for local testing, set the following in the application's fastly.toml file:
[local_server.pushpin]
enable = true
Enable Fanout on your Fastly service
IMPORTANT: Fanout requires the purchase of a paid Fastly account with at least one Compute service created on it.
Fanout is an optional upgrade to Fastly service plans and is disabled by default. If you have not yet purchased access, contact sales. Alternatively, anyone assigned the role of superuser or engineer can enable a 30-day trial directly in the web interface in the service configuration settings of a Compute service, via the API, or using the Fastly CLI:
$ fastly products --enable=fanout
WARNING: Enabling or disabling Fanout immediately impacts all service versions, including the active one.
Quick start
There are many different ways of using Fanout, but to quickly see what it can do, use one of the fully-featured Compute starter kits:
Create a project from template
To create a Compute project with Fanout pre-configured, use fastly compute init:
- Rust
- JavaScript
- Go
$ fastly compute init --from=https://github.com/fastly/compute-starter-kit-rust-fanout
Run the service locally
Fastly's local development environment integrates with a local installation of Pushpin to enable local testing of Fanout features in Compute applications.
NOTE: Install Pushpin according to the instructions for your environment. See full requirements for testing Fanout features locally.
The starter kit project enables Fanout features for local testing by setting the following in the fastly.toml file:
[local_server.pushpin]
enable = true
Once the project has been initialized, run the project in your local development environment:
fastly compute serve
HINT: By default, the Fastly CLI searches your system path for pushpin. Alternatively, specify the path to pushpin when starting the test environment:
fastly compute serve --pushpin-path=/path/to/pushpin
Because Pushpin is enabled, you will see Pushpin initialization messages included in the startup output of the development server.
✓ Verifying fastly.toml
✓ Identifying package name
✓ Identifying toolchain
✓ Running [scripts.build]
✓ Creating package archive
SUCCESS: Built package (pkg/fanout-starter-kit.tar.gz)
✓ Running local Pushpin
2025-07-29T04:31:16.071246Z INFO [Pushpin] using config: /opt/homebrew/etc/pushpin/pushpin.conf
2025-07-29T04:31:16.358935Z INFO [Pushpin] starting connmgr
2025-07-29T04:31:16.360125Z INFO [Pushpin] starting proxy
2025-07-29T04:31:16.360900Z INFO [Pushpin] starting handler
✓ Running local server
Test locally
Once the service is running, it listens for HTTP requests at localhost:7676. The publishing API is available at http://localhost:5561/publish/.
In the starter kit project, requests with URLs beginning with /test/ trigger a "Fanout handoff" to itself via a backend named self, defined in fastly.toml:
[local_server.backends]
[local_server.backends.self]
url = "http://localhost:7676/"
override_host = "localhost:7676"
Thus, the starter kit is invoked again, this time through Fanout (indicated by the request header Grip-Sig having a value). It creates long-lived connections for requests to /test/stream, /test/sse, /test/long-poll, and /test/websocket, subscribing clients to a channel called test.
You can now test the starter kit using any of the supported transports:
- HTTP Streaming (incl. Server-Sent Events)
- HTTP Long polling
- WebSockets
To test the starter kit using HTTP Streaming, in a terminal window, make an HTTP request for /test/stream:
$ curl -i "http://localhost:7676/test/stream"
HTTP/1.1 200 OK
content-type: text/plain
date: Tue, 23 Aug 2022 12:48:05 GMT
connection: Transfer-Encoding
transfer-encoding: chunked
You'll see output such as the above but you won't return to the shell prompt. Now, in another terminal window, run:
$ curl -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "hello\n"}}}]}' http://localhost:5561/publish/
The published data includes an http-stream representation of your data, which Fanout can use for streaming connections. The event you published appears on your curl output:
hello
You can continue to publish more messages, and they will be appended to the streaming response.
Server-Sent Events (SSE) is a specialized application of HTTP Streaming that uses the Content-Type value of text/event-stream on the response. To test the starter kit using SSE, in a terminal window, make an HTTP request for /test/sse:
$ curl -i "http://localhost:7676/test/sse"
HTTP/1.1 200 OK
content-type: text/event-stream
date: Tue, 23 Aug 2022 12:48:05 GMT
connection: Transfer-Encoding
transfer-encoding: chunked
Now, in another terminal window, run:
$ curl -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "event: message\ndata: {\"text\": \"hello world\"}\n\n"}}}]}' http://localhost:5561/publish/
The event you published appears on your curl output:
event: message
data: {"text": "hello world"}
The pattern implemented in the starter kit sets up streams entirely at the edge and works best when logic in the Compute service determines the channels for a connecting client. In this setup, the origin handles only publishing events, while the Compute service negotiates the setup of streams.
Deploy to a Fastly service
You can compile and publish the starter kit project to a live Fastly service using fastly compute publish:
$ fastly compute publish
Create new service: [y/N] y
Domain: [some-funky-words.edgecompute.app]
Backend (hostname or IP address, or leave blank to stop adding backends):
✓ Creating domain 'some-funky-words.edgecompute.app'...
✓ Uploading package...
✓ Activating version...
SUCCESS: Deployed package (service 0eBOC1x5Q0HHadAlpeKbvt, version 1)
Add a backend
Fanout communicates with a backend server to get instructions on what to do with each new connection. The starter kit is configured to use itself as the backend. To make use of this, add a backend called self to your service that directs requests back to the service itself:
$ fastly backend create --name self -s {SERVICE_ID} --address {PUBLIC_DOMAIN} --port 443 --version latest --autoclone
$ fastly service-version activate --version latest -s {SERVICE_ID}
The {SERVICE_ID} and {PUBLIC_DOMAIN} should be replaced by the values shown in the output from the publish step.
IMPORTANT: If you use the web interface or API to create the backend, be sure to set a host header override if your server's hosting is name-based. Learn more.
Enable Fanout
Fanout is an optional upgrade to Fastly service plans and is disabled by default.
NOTE: See full requirements for enabling Fanout on your Fastly service.
Fanout can be enabled on an individual service in the web interface or by enabling the fanout product using the product enablement API or the Fastly CLI:
$ fastly products --enable=fanout
WARNING: Enabling or disabling Fanout immediately impacts all service versions, including the active one.
Authenticating
To make the API calls to perform publishing actions, you'll need a Fastly API Token that has the global scope for your service.
Test on Fastly
Once the starter kit is deployed to your Fastly service, you can now perform the same tests as in Test locally above, repeated below for convenience.
NOTE: Channel names are scoped to the Fastly service.
- HTTP Streaming (incl. Server-Sent Events)
- HTTP Long polling
- WebSockets
In one terminal window, make an HTTP request for /test/stream:
$ curl -i "https://some-funky-words.edgecompute.app/test/stream"
HTTP/2 200
content-type: text/plain
x-served-by: cache-lhr7380-LHR
date: Tue, 23 Aug 2022 12:48:05 GMT
You'll see output such as the above but you won't return to the shell prompt. Now, in another terminal window, run:
$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "hello\n"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/
The published data includes an http-stream representation of your data, which Fanout can use for streaming connections. The event you published appears on your curl output:
hello
You can continue to publish more messages, and they will be appended to the streaming response.
Server-Sent Events (SSE) is a specialized application of HTTP Streaming that uses the Content-Type value of text/event-stream on the response. The starter kit provides a test endpoint for SSE. Make an HTTP request for /test/sse:
$ curl -i "https://some-funky-words.edgecompute.app/test/sse"
HTTP/2 200
content-type: text/event-stream
x-served-by: cache-lhr7380-LHR
date: Tue, 23 Aug 2022 12:48:05 GMT
Now, in another terminal window, run:
$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"http-stream":{"content": "event: message\ndata: {\"text\": \"hello world\"}\n\n"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/
The event you published appears on your curl output:
event: message
data: {"text": "hello world"}
Next steps
Now you have an operational Fanout message broker. Consider how you might want to modify this setup to suit your needs:
- Learn more about subscribing, including examples of the front-end JavaScript code you need to interact with streams.
- Learn more about publishing, including simple libraries that can abstract the complexity of message formatting for you.
- If you only need one kind of transport (e.g., WebSockets, and not SSE), feel free to remove the code that enables the other transports.
- If you prefer to have your origin server do the stream setup, then most of the edge code is no longer needed. See Connection setup below.
- If you intend to use the new service in production, you'll want to add at least one origin server and a domain, and consider how you want the Fastly platform to cache your non-streamed content.
Connection setup
Fanout connections are created by explicitly calling the appropriate handoff method in your preferred Compute language SDK. Fanout then queries a nominated backend to find out what to do with the request. It's up to the backend to tell Fanout to treat the request as a stream and to provide a list of channels that the client should subscribe to.
The Compute application decides what kinds of requests to hand off to Fanout, and the backend application decides what channels to subscribe the client to. This backend can also be a Fastly service—it can even be the same service.
IMPORTANT: Fanout handoff cannot specify a dynamic backend. An approximate workaround is to hand off to the service itself and then, in that inner call, forward the request to a dynamic backend.
Responding to Fanout requests
Fanout communicates with backends by forwarding client requests and interpreting instructions in the response formatted using Generic Realtime Intermediary Protocol (GRIP). When a client request is handed off to Fanout, Fanout will forward the request to the backend specified in the handoff. The backend response can tell Fanout how to handle the connection lifecycle, using GRIP instructions.
- HTTP Streaming (incl. Server-Sent Events)
- HTTP Long polling
- WebSockets
Fanout forwards regular HTTP requests to the backend. Client request headers that are added, removed, or modified on your Request will be reflected in the Fanout handoff.
If the backend wants to use the request for HTTP streaming (including Server-Sent Events), it should use GRIP headers to instruct Fanout to hold that response as a stream:
HTTP/1.1 200 OK
Content-Type: text/plain
Grip-Hold: stream
Grip-Channel: mychannel
Grip-Channel: anotherchannel
The GRIP headers that are relevant to initiating HTTP streams are:
- Grip-Hold: Set to stream to tell Fanout to deliver the headers of the response to the client immediately, and then deliver messages as they are published to subscribed channels.
- Grip-Channel: A channel to subscribe the request to. Multiple Grip-Channel headers may be specified in the response to subscribe the request to multiple channels.
- Grip-Keep-Alive: Data to be sent to the client after a certain amount of inactivity. The timeout parameter specifies the length of time a request must be idle before the keep-alive data is sent (default 55 seconds). The format parameter specifies the format of the keep-alive data. Allowed values are raw, cstring, and base64 (default raw). For example, to send a newline character to the client after 20 seconds of inactivity, the following header could be used: Grip-Keep-Alive: \n; format=cstring; timeout=20.
Messages to be published to a client in a Grip-Hold: stream state must have an http-stream format available. Learn more about publishing.
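As a concrete illustration, a backend built with a framework like Express might produce the example response above as follows (a minimal sketch; the route path and channel names mirror the example and are not prescribed by Fanout):
import express from 'express';

const app = express();

// Instruct Fanout to hold this response open as a stream and subscribe the
// connection to two channels (mirroring the example response above).
app.get('/stream', (req, res) => {
  res.set('Content-Type', 'text/plain');
  res.set('Grip-Hold', 'stream');
  res.set('Grip-Channel', 'mychannel');
  res.append('Grip-Channel', 'anotherchannel');
  res.send('stream open\n');
});

app.listen(3000);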
Server-Sent Events (SSE) is a specialized application of HTTP Streaming that uses the Content-Type value of text/event-stream on the response.
HTTP/1.1 200 OK
Content-Type: text/event-stream
Grip-Hold: stream
Grip-Channel: mychannel
Compliant Server-Sent Events clients (such as the EventSource API built into web browsers) will send a Last-Event-ID header with new connection requests. If you care about ensuring clients do not miss events during reconnects, consider parsing this header and including missed events in the initial response along with the Grip-Hold header, allowing subsequent events provided via the publishing API to be appended by Fanout later to the same response.
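For example, the backend could build the initial response body from its own event history before handing the stream to Fanout (a sketch; getEventsSince is a hypothetical lookup into your event store, and lastEventId is the value of the Last-Event-ID request header):
// Sketch: replay missed SSE events in the initial GRIP-held response body.
// getEventsSince() is a hypothetical function returning events newer than
// the given ID from your own storage.
function buildInitialSseBody(lastEventId) {
  if (!lastEventId) return '';
  let body = '';
  for (const ev of getEventsSince(lastEventId)) {
    body += `id: ${ev.id}\nevent: message\ndata: ${ev.data}\n\n`;
  }
  return body;
}
The resulting string would be sent as the body of the text/event-stream response shown above, alongside the Grip-Hold and Grip-Channel headers.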
Validating GRIP requests
If the backend is running on a public server, then it's a good idea to validate that the request is actually coming through Fastly Fanout. The Grip-Sig header value can be used to do this. Grip-Sig is provided as a JSON Web Token, and tokens signed by Fastly Fanout can be validated using the following public key. If the token cannot be fully verified for any reason, including expiration, then the backend should behave as if the header wasn't present.
This is the public key we use for signing GRIP requests:
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAECKo5A1ebyFcnmVV8SE5On+8G81Jy
BjSvcrx4VLetWCjuDAmppTo3xM/zz763COTCgHfp/6lPdCyYjjqc+GM7sw==
-----END PUBLIC KEY-----
Many language ecosystems provide libraries for validating JWTs. If your backend uses JavaScript, for example, the jsonwebtoken module can be used:
import * as jwt from 'jsonwebtoken';

const FANOUT_PUBLIC_KEY = `-----BEGIN PUB...`;

// Assuming ExpressJS or similar
app.get('/chat-stream/:user', (req, res) => {
  jwt.verify(req.header('Grip-Sig'), FANOUT_PUBLIC_KEY);
});
NOTE: If your backend is running on JavaScript on Fastly Compute, the runtime does not support PEM-formatted keys at this time. Use the equivalent JSON Web Key (JWK) format instead:
const FANOUT_PUBLIC_KEY = {
  "kty": "EC",
  "crv": "P-256",
  "x": "CKo5A1ebyFcnmVV8SE5On-8G81JyBjSvcrx4VLetWCg",
  "y": "7gwJqaU6N8TP88--twjkwoB36f-pT3QsmI46nPhjO7M"
};
You can import this key and then use it with a library such as jose to validate a JWT:
import * as jose from 'jose';

const cryptoKey = await crypto.subtle.importKey(
  'jwk',
  FANOUT_PUBLIC_KEY,
  { name: 'ECDSA', namedCurve: 'P-256' },
  false,
  ['verify']
);
const result = await jose.jwtVerify(req.headers.get('Grip-Sig'), cryptoKey);
What to hand off to Fanout
You should be selective about which requests you hand off to Fanout (e.g., by checking the URL path or headers). Handing off a request that isn't intended to be a stream may still work, because Fanout will relay that request to origin, and if the response is not GRIP or WebSocket-over-HTTP, Fanout will simply relay it back to the client and close the connection. However, passing all requests through Fanout is not recommended, for a number of reasons:
- Only limited changes to requests (e.g., to request headers) will be reflected when handing off to Fanout.
- The request will not interact with the Fastly cache, so even content that is not intended to be streamed will not be cached.
- Responses from origin will be delivered directly to the client by Fanout, and will not be available to the Compute program.
As a result it usually makes sense to hand off requests only when they target a known path or set of paths on which you want to stream responses:
- Rust
- JavaScript
- Go
use fastly::{Error, Request};

fn main() -> Result<(), Error> {
    let req = Request::from_client();

    if req.get_path().starts_with("/test/") {
        // Hand off stream requests to the stream backend
        return Ok(req.handoff_fanout("stream_backend")?);
    }

    // Forward all non-stream requests to the primary backend
    Ok(req.send("primary_backend")?.send_to_client())
}
Negotiating streams at the edge
Fanout separates the concerns of subscribing and publishing. This makes it possible to negotiate streams at the edge, potentially reducing latency, simplifying infrastructure, and improving flexibility in how streaming endpoints are scaled and secured.
To do this, configure the Fastly Compute service itself as the Fanout backend. The Fanout starter kits demonstrate this pattern by creating a backend named self that points to the public domain name of the same Compute service, and referencing that backend when handing off to Fanout. In this configuration, the Compute program both handles the initial client request and responds to the stream negotiation request from Fanout.
This approach is suitable when:
- The number of channels is relatively small
- Clients explicitly specify which channels to subscribe to
- The publisher does not require visibility into subscriber state
In the following cases, this approach would not be suitable, and stream negotiation should be performed by your origin server:
- Subscriber state must be visible at the origin
- Stream authentication or long-polling is only implemented at the origin
- Stream determination cannot be made in advance
- Clients must not miss messages during reconnects
When Fanout relays requests to the backend, it preserves the original path and adds a Grip-Sig header. These can be used together to distinguish streaming-related traffic and determine whether a request is relayed by Fanout or sent directly from a client.
- Rust
- JavaScript
- Go
use fastly::{Error, Request, Response};
use fastly::http::StatusCode;

fn handle_fanout(req: Request, chan: &str) -> Response {
    Response::from_status(StatusCode::OK)
}

fn main() -> Result<(), Error> {
    let req = Request::from_client();

    // Request is a stream request
    if req.get_path().starts_with("/test/") {
        if req.get_header_str("Grip-Sig").is_some() {
            // Request is from Fanout
            return Ok(handle_fanout(req, "test").send_to_client());
        }

        // Not from Fanout, route back to self through Fanout first
        return Ok(req.handoff_fanout("self")?);
    }

    // Forward all non-stream requests to the primary backend
    Ok(req.send("primary_backend")?.send_to_client())
}
The handle_fanout function (handleFanout in JavaScript and Go) invoked in the example above is a user-provided function that should return a GRIP HTTP or a WebSockets-over-HTTP response. The Fanout starter kits contain example implementations.
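As a rough illustration only (the starter kits are the reference implementations), a JavaScript version covering just the HTTP streaming and SSE cases might look like the following sketch, which assumes the Fetch-style Request/Response objects available to JavaScript Compute programs:
// Sketch of a handleFanout-style function. It returns a GRIP response that
// holds the connection open as a stream and subscribes it to `channel`.
// WebSockets-over-HTTP handling is omitted here.
function handleFanout(req, channel) {
  const path = new URL(req.url).pathname;
  const contentType = path === '/test/sse' ? 'text/event-stream' : 'text/plain';
  return new Response('', {
    status: 200,
    headers: {
      'Content-Type': contentType,
      'Grip-Hold': 'stream',
      'Grip-Channel': channel,
    },
  });
}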
Subscribing
Fanout is designed to allow push messaging to integrate seamlessly into your domain. When clients make HTTP requests or WebSocket connections that arrive at Fastly's network, what happens next depends on the instructions provided by your backend. These instructions can include subscribing the client to one or more channels.
For HTTP-based transports (such as Server-Sent Events and long polling), this is done with response headers. For example:
Grip-Hold: stream
Grip-Channel: mychannel
For the WebSocket transport, this is done by sending GRIP control messages as part of a WebSockets-over-HTTP response. For example:
c:{"type": "subscribe", "channel": "mychannel"}
It's important to understand that clients don't assert their own subscriptions. Clients make arbitrary HTTP requests or send arbitrary WebSocket messages, and it is your backend that determines whether clients should be subscribed to anything. Your channel schema remains private between Fanout and your backend server, and in fact clients may not even be aware that publish-subscribe activities are occurring.
HINT: Your application's design may still allow for client requests to specify channel names. A path such as /stream/departure-KR4N81 to get a real-time stream of departure status for a flight booking, for example, is passing the name of the desired channel in the path. If the backend deems the client to be entitled to data from that channel, it could extract this token from the path and pass it to Fanout in a GRIP subscription instruction.
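A sketch of that idea (isEntitled is a hypothetical authorization check; the path pattern mirrors the example above):
// Sketch: derive a channel name from the request path and, if the client is
// entitled to it, return the GRIP headers that subscribe the connection.
// isEntitled() is a hypothetical check against your own auth logic.
function gripHeadersForDeparturePath(req) {
  const match = new URL(req.url).pathname.match(/^\/stream\/(departure-[A-Z0-9]+)$/);
  if (!match || !isEntitled(req, match[1])) {
    return null; // respond with a 403 or 404 instead of subscribing
  }
  return {
    'Grip-Hold': 'stream',
    'Grip-Channel': match[1],
  };
}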
If your client is a web browser, you will use JavaScript to initiate streaming requests to the backend:
- HTTP Streaming (incl. Server-Sent Events)
- HTTP Long polling
- WebSockets
Modern web browsers have support for reading from streams via the fetch function and ReadableStream interface:
const eventList = document.querySelector('ul');

const response = await fetch('/test/stream');
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) {
    break;
  }

  // Append each decoded chunk of the stream as a new list item
  const newElement = document.createElement("li");
  newElement.textContent = decoder.decode(value, { stream: true });
  eventList.appendChild(newElement);
}
Web browsers have built-in support for Server-Sent Events via the EventSource API:
const evtSource = new EventSource('/test/sse');
const eventList = document.querySelector('ul');

evtSource.onmessage = (event) => {
  const newElement = document.createElement("li");
  newElement.textContent = `message: ${event.data}`;
  eventList.appendChild(newElement);
};
If your SSE events include an id: property, the EventSource will add a Last-Event-ID header to each request, which can be used to deliver missed messages when a new stream begins.
Publishing
Messages are published to Fanout channels using the publishing API. To publish events, send an HTTP POST request to https://api.fastly.com/service/{SERVICE_ID}/publish/. You'll need to authenticate with a Fastly API Token that has the global scope for your service.
Messages can also be delivered during connection setup (often to provide events that the client missed while not connected), and in response to inbound WebSocket messages. Events delivered in this way go to the client making the request (or sending the inbound WebSocket message), and do not use pub/sub channel subscriptions.
IMPORTANT: Unlike other Fastly APIs, the publishing endpoint requires a trailing slash: publish/.
Publish requests include the messages to be published in a JSON data model:
| Property | Type | Description |
|---|---|---|
| items | Array | A list of messages to publish |
| └─ [i] | Object | Each member of the array is a single message |
| └─ id | String | A string identifier for the message. See de-duplication. |
| └─ prev-id | String | Identifier of the previous message that was published to the channel. See sequencing. |
| └─ channel | String | The name of the Fanout channel to which to publish the message. One channel per message. |
| └─ formats | Object | A set of representations of the message, suitable for different transports. |
| └─ ws-message | Object | A message representation suitable for delivery to WebSockets clients. |
| └─ content | String | Content of the WebSocket message. |
| └─ content-bin | String | Base-64 encoded content of the WebSocket message (use instead of content if the message is not a string). |
| └─ action | String | A publish action. |
| └─ http-stream | Object | A message representation suitable for delivery to Server-Sent Events clients. |
| └─ content | String | Content of the SSE message. Must be compatible with the text/event-stream format. |
| └─ content-bin | String | Base-64 encoded content of the SSE message (use instead of content if the message is not a string). |
| └─ action | String | A publish action. |
| └─ http-response | Object | A message representation suitable for delivery to long-polling clients. |
| └─ action | String | A publish action. |
| └─ code | Number | HTTP status code to apply to the response. |
| └─ reason | String | Informational label for the HTTP status code (delivered only over HTTP/1.1). |
| └─ headers | Object | A key-value map of headers to set on the response. |
| └─ body | String | Complete body of the HTTP response to deliver. |
| └─ body-bin | String | Base-64 encoded body content (use instead of body if the body is not a string). |
Minimally, a publish request must contain one message in at least one format, with the content property (for http-stream or ws-message) or the body property (for http-response) specified. An example of a valid publish payload is:
{ "items": [ { "channel": "test", "formats": { "ws-message": { "content": "hello" } } } ]}
This can be sent using curl as shown:
$ curl -H "Fastly-Key: {YOUR_FASTLY_TOKEN}" -d '{"items":[{"channel":"test","formats":{"ws-message":{"content":"hello"}}}]}' https://api.fastly.com/service/{SERVICE_ID}/publish/
WARNING: If you are migrating to Fastly Fanout from self-hosted Pushpin or fanout.io, you may be using a GRIP library in your server application. Some of these libraries currently are not compatible with Fastly Fanout. See Libraries and SDKs for a list of libraries that are compatible with Fastly Fanout.
Publish actions
Published items can optionally specify one of three actions:
- send: The included content should be delivered to subscribers. This is the default if unspecified.
- hint: The content to be delivered to subscribers must be externally retrieved. No content is included in the published item.
- close: The request or connection associated with the subscription should be ended/closed.
Sequencing
If Fanout receives a message with a prev-id that doesn't match the id of an earlier message, Fanout will buffer it until it receives a message whose id matches that value, at which point both messages will be delivered in the right order. If the expected message is never received, the buffered message will eventually be delivered anyway (around 5-10 seconds later).
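For example, if the first of these two publish requests arrives before the second, Fanout holds it until the message with id "1" arrives, then delivers both in order (the channel, id, and content values here are illustrative):
{"items": [{"channel": "test", "id": "2", "prev-id": "1", "formats": {"http-stream": {"content": "second\n"}}}]}
{"items": [{"channel": "test", "id": "1", "formats": {"http-stream": {"content": "first\n"}}}]}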
De-duplication
If Fanout receives a message with an id that it has seen recently (within the last few seconds), the publish action will be accepted but no message will be created. This happens even if the message content is different from any prior message that had the same id.
This feature is typically useful if you have an architecture with redundant publish paths. For example, you could have two publisher processes handling event triggers and have both send each message for high availability. Fanout would receive every message twice, but only process each message once. If one of the publishers fails, messages would still be received from the other.
Limits
By default, messages are limited to 65,536 bytes for the “content” portion of the format being published. For the normal HTTP and WebSocket transports, the content size is the number of HTTP body bytes or WebSocket message bytes (TEXT frames converted to UTF-8).
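If you generate message content programmatically, you may want to check the encoded size before publishing (a sketch; the limit value is the default stated above):
// Sketch: verify that published text content stays within the default
// 65,536-byte limit, measured as UTF-8 bytes.
const MAX_CONTENT_BYTES = 65536;

function withinPublishLimit(content) {
  return new TextEncoder().encode(content).length <= MAX_CONTENT_BYTES;
}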
Inbound WebSockets messages
Unlike HTTP-based push messaging (e.g. server-sent events), WebSockets is bidirectional. When clients send messages to Fastly over an already-established WebSocket, Fanout will make a WebSockets-over-HTTP request to the Fanout backend, with a TEXT or BINARY segment containing the message from the client.
POST /test/websocket HTTP/1.1
Sec-WebSocket-Extensions: grip
Content-Type: application/websocket-events
Accept: application/websocket-events

TEXT 16\r\nHello from the client!\r\n
The response from the backend may include TEXT or BINARY segments, which will be delivered to the client that sent the message (disregarding the channel-based pub/sub brokering). TEXT segments may also include GRIP control messages to instruct Fanout to modify the client stream, for example to change which channels it subscribes to.
HTTP/1.1 200 OK
Content-Type: application/websocket-events

TEXT 0C\r\nYou said Hi!
TEXT 45\r\nc:{"type": "subscribe", "channel": "additional-channel-subscription"}\r\n
The starter kits for Fanout include an example of handling inbound WebSockets messages by echoing the content of the message back to the client.
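The events in a WebSockets-over-HTTP body follow the pattern shown above: an event type, a hexadecimal content size, then the content. A small helper like the following can make building such bodies less error-prone (a sketch based on the examples above, not part of any SDK):
// Sketch: encode a TEXT event in the body format shown above. The size is
// the byte length of the content, written in hexadecimal.
function encodeTextEvent(content) {
  const size = new TextEncoder()
    .encode(content)
    .length.toString(16)
    .toUpperCase()
    .padStart(2, '0');
  return `TEXT ${size}\r\n${content}\r\n`;
}

// Echo the client's message and add a channel subscription, as in the
// example response above.
const body =
  encodeTextEvent('You said Hi!') +
  encodeTextEvent('c:{"type": "subscribe", "channel": "additional-channel-subscription"}');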
Local testing
Fastly's local development environment integrates with a local installation of Pushpin to enable local testing of Fanout features in Compute applications.
NOTE: Install Pushpin according to the instructions for your environment. See full requirements for testing Fanout features locally.
To test Fanout features in the local development environment, include the pushpin section in your project's fastly.toml file:
[local_server.pushpin]
enable = true
By default, the Fastly CLI searches your system path for pushpin. See the fastly.toml reference for details on overriding this value.
In addition, set up any backends using the standard backends section. For example:
[local_server.backends]
[local_server.backends.origin]
url = "http://localhost:3000/"
override_host = "localhost:3000"
When you run fastly compute serve, the Fastly CLI configures Pushpin routes based on the defined backends and then starts Pushpin alongside your Compute application, enabling it for "Fanout handoff". The publishing API is available at http://localhost:5561/publish/. See the fastly.toml reference for details on overriding this value.
IMPORTANT: Specifying remote backends when testing Fanout handoff in the local development environment is valid. However, because the publishing API runs locally, it is often convenient to also run locally any backends or applications that need to test publishing.
Libraries and SDKs
Libraries exist for many languages and frameworks to make it easier to interact with GRIP. This includes common activities such as initializing streams, publishing, and parsing WebSocket-over-HTTP messages.
The following GRIP libraries are compatible with Fastly Fanout. See the instructions of each library for details on usage with Fastly Fanout.
- Python: gripcontrol, django-grip, django-eventstream
- PHP: fanout/grip, laravel-grip
- JavaScript: @fanoutio/grip, @fanoutio/serve-grip, @fanoutio/eventstream
The GRIP_URL
These libraries are often configured using a GRIP_URL, which encodes the publishing endpoint and credentials for request validation and publishing.
It is usually set using environment variables or, in Compute, using the Secret Store. This is a recommended best practice for keeping the configuration of your local testing environment separate from that of your Fastly service.
Local testing:
- GRIP_URL: use http://127.0.0.1:5561/
Fastly service:
- GRIP_URL: use https://api.fastly.com/service/<service-id>/?verify-iss=fastly:<service-id>&key=<fastly-api-token>
- GRIP_VERIFY_KEY: use the following value (this is a string used to configure the verify-key component of GRIP_URL, as it can get very long):
base64:LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0KTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFQ0tvNUExZWJ5RmNubVZWOFNFNU9uKzhHODFKeQpCalN2Y3J4NFZMZXRXQ2p1REFtcHBUbzN4TS96ejc2M0NPVENnSGZwLzZsUGRDeVlqanFjK0dNN3N3PT0KLS0tLS1FTkQgUFVCTElDIEtFWS0tLS0t
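To illustrate what the GRIP_URL encodes, here is a sketch of publishing with it directly, without a GRIP library. The parsing and header handling below are illustrative assumptions (the key parameter is sent as the Fastly-Key header, as in the curl examples above); the libraries listed earlier handle this for you:
// Sketch: derive the publishing endpoint and credentials from GRIP_URL.
// A GRIP library normally does this parsing for you.
const gripUrl = new URL(process.env.GRIP_URL);
const key = gripUrl.searchParams.get('key');

const endpoint = new URL('publish/', gripUrl.origin + gripUrl.pathname);
const headers = { 'Content-Type': 'application/json' };
if (key) {
  // Assumption: for the Fastly publishing API the key is the API token,
  // sent as the Fastly-Key header.
  headers['Fastly-Key'] = key;
}

await fetch(endpoint, {
  method: 'POST',
  headers,
  body: JSON.stringify({
    items: [{ channel: 'test', formats: { 'http-stream': { content: 'hello\n' } } }],
  }),
});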
Best practices
To get the most out of using Fanout on Fastly, consider the following tips:
- Avoid stateful protocol designs: for example, keeping a client's last-received message position on the server instead of in the client. These patterns will work, but they will be hard to reason about. It's best if the client asserts its own state.
- Don't keep track of connections on the server: it is rarely important to know about connections, as opposed to users. If you're implementing presence detection, it's better to do this at the user/device level using heartbeats, independently of connections.