Why Fastly loves QUIC and HTTP/3
At Fastly, we're very intentional and selective about where we put our resources. That means we care more about the kinds of projects we take on, and the impact they can have, than about how many of them we pursue. It's why we're thrilled to be so invested in QUIC, a new transport protocol for the internet that is more responsive, secure, and flexible than what the internet uses today.
QUIC is being standardized at the Internet Engineering Task Force (IETF) and our engineering team includes one of the IETF QUIC working group's chairs and an editor of the core document. We are also working on quicly, our own implementation of this new protocol. Let’s explore what QUIC is in a bit more detail and why we’re so excited to put our resources behind it.
Defining QUIC (and HTTP/3)
QUIC (that's not an acronym) is a brand-new internet transport protocol, intended to replace TCP, the most commonly used transport today. TCP provides a way for clients and servers to communicate reliably in the face of packet loss, share network resources gracefully between applications, and ensure that data is delivered in the same order it was sent.
Modern TCP is the result of decades of experimentation, experience, and extensions to the core protocol defined in RFC 793, published back in 1981. When the HTTP Working Group took on the task of producing HTTP/2 back in 2013, most of its focus was on optimizing how HTTP uses TCP. The group's goal was to eliminate head-of-line blocking, where an outstanding HTTP request effectively precludes the use of a connection until its response is received. To address this, HTTP/2 introduced multiplexing of requests, making it possible to use a single TCP connection much more efficiently.
However, the work to optimize HTTP exposed another problem: head-of-line blocking is present in TCP as well because TCP delivers data to the receiving application in order. If a packet is lost, the TCP receiver withholds all subsequently received packets on that connection until a retransmission of the lost packet is received. As a result, the application will have to wait for this retransmission to be received, even if it could have used the subsequently received packets (as is the case in a protocol like HTTP/2, where multiple transfers are happening at once, but only one TCP connection is used).
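To make that concrete, here's a toy sketch of in-order delivery in plain Python; it's just the buffering logic, nothing like a real TCP stack. Segments that arrive after a hole sit unused until the retransmission fills it:

```python
# Toy model of TCP-style delivery: one connection-wide receive buffer that
# must hold everything back until the missing segment arrives.
def deliver_in_order(buffer, next_seq):
    """Pop and return the segments that are now contiguous, in order."""
    delivered = []
    while next_seq in buffer:
        delivered.append(buffer.pop(next_seq))
        next_seq += 1
    return delivered, next_seq

buffer = {}
next_seq = 0

# Segment 0 is lost; segments 1-3 arrive but can only sit in the buffer.
for seq in (1, 2, 3):
    buffer[seq] = f"segment {seq}"
print(deliver_in_order(buffer, next_seq))   # ([], 0): the app gets nothing

# The retransmission finally shows up, unblocking everything at once.
buffer[0] = "segment 0"
print(deliver_in_order(buffer, next_seq))   # (['segment 0', ..., 'segment 3'], 4)
```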
This head-of-line blocking problem in TCP manifests itself in a few different ways in HTTP/2. While HTTP/2 performs better than HTTP/1 in most situations, it can do worse when the connection is bad: one with high loss, for example, or one with a little loss and a lot of latency.
QUIC addresses this problem by moving HTTP/2's stream layer down into a generic transport, avoiding head-of-line blocking at both the application and transport layers. It does so by building a new set of TCP-like services on top of UDP. UDP offers no in-order, reliable delivery and no congestion control; QUIC provides these in a new layer built on top of it.
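Continuing the toy model above, here's the same situation with QUIC-style per-stream reassembly (again plain Python, not a real implementation, and with offsets simplified to chunk indices): a lost packet stalls only the stream it belonged to.

```python
from collections import defaultdict

buffers = defaultdict(dict)   # stream_id -> {offset: chunk}
next_off = defaultdict(int)   # stream_id -> next offset the app expects

def on_stream_data(stream_id, offset, chunk):
    """Buffer the chunk, then hand the app whatever is now contiguous
    on this stream; other streams are never consulted."""
    buffers[stream_id][offset] = chunk
    delivered = []
    while next_off[stream_id] in buffers[stream_id]:
        delivered.append(buffers[stream_id].pop(next_off[stream_id]))
        next_off[stream_id] += 1
    return delivered

# Stream 0's first chunk is lost in transit; stream 4 is untouched by
# that loss and keeps flowing.
print(on_stream_data(0, 1, "s0-chunk1"))  # []: stream 0 waits on its hole
print(on_stream_data(4, 0, "s4-chunk0"))  # ['s4-chunk0']: stream 4 delivers
print(on_stream_data(0, 0, "s0-chunk0"))  # ['s0-chunk0', 's0-chunk1']
```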
Additionally, QUIC incorporates an encryption scheme that reuses the machinery of TLS 1.3 to ensure that applications’ connections are confidential, authenticated, and integrity-protected. TLS is the same technology that protects the data of HTTPS, but in QUIC it protects both the data and the transport protocol itself. This approach makes QUIC more responsive than TCP by allowing it to begin data transfer earlier.
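The intuition, as rough round-trip arithmetic rather than a benchmark: TCP's handshake has to finish before the TLS 1.3 handshake can start, while QUIC performs its transport and cryptographic handshakes together.

```python
# Illustrative round-trip arithmetic only. With TCP, the transport handshake
# (1 RTT) must complete before the TLS 1.3 handshake (1 more RTT) can begin;
# QUIC runs both in a single round trip, and 0-RTT resumption on a repeat
# connection can remove even that.
rtt_ms = 200  # a plausible round-trip time on a high-latency mobile path

setups = {
    "TCP + TLS 1.3": 2 * rtt_ms,
    "QUIC, first contact": 1 * rtt_ms,
    "QUIC, 0-RTT resumption": 0,
}
for name, wait_ms in setups.items():
    print(f"{name}: {wait_ms} ms before the first request can leave")
```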
The first application for this new transport protocol is HTTP/3, the version of HTTP designed to take advantage of QUIC's features. Importantly, HTTP/3 does not change HTTP semantics (URLs and HTTP header definitions) from HTTP/2. By using QUIC, HTTP/3 should address HTTP/2's performance issues without requiring changes to applications (the entities using HTTP). That alone is an exciting improvement, since it makes HTTP/3 a drop-in replacement for HTTP/2.
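As a concrete example of how seamless the switch is meant to be: a server can advertise HTTP/3 support with an Alt-Svc response header (the existing alternative-services mechanism from RFC 7838), and a supporting client simply retries over QUIC. Here, "h3" is the token naming HTTP/3 (draft deployments use numbered variants of it), and ma says how long, in seconds, the client may remember the hint:

```http
HTTP/1.1 200 OK
Alt-Svc: h3=":443"; ma=86400
```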
But that’s just the start. What else is exciting about QUIC for internet users?
Made for the last mile
Latency-sensitive web services are growing, placing unprecedented demands on serving infrastructure. At the same time, even though internet infrastructure has improved dramatically over the years, a significant number of users remain behind poorly connected networks, and that number is likely to grow as more people around the world come online.
QUIC is designed to improve performance by mitigating last-mile loss and latency, and it benefits poorly connected users more than those who already have good connectivity. If it lives up to its promise, most websites and users, especially those in poorly served parts of the world, whether in Outback Australia, rural India, or the South Pacific islands, will see better performance. We believe QUIC's value is likely to increase as internet penetration grows globally.
Google's experiences with their proprietary, pre-standardization version of QUIC show significant improvements in exactly these situations: networks with high loss and/or latency. Specifically, Google saw a 16.7% reduction in end-to-end search latency for the highest-latency desktop connections, and a 14.3% reduction for the highest-latency mobile ones. Considering that their search pages are some of the most highly optimized URLs out there, that's an impressive improvement. For video, they saw an 8.7% reduction in rebuffering for the most troublesome connections on mobile. These benefits come from QUIC's latency-optimized handshake, better loss recovery, and, importantly, from bypassing bugs and inefficiencies in TCP and in the internet ecosystem that's built around TCP.
A new promise of agility
The transport protocol we use needs to adapt and evolve if it is to continue serving as an effective glue between increasingly demanding applications and the chaotic underlying internet. This is not easy today with TCP for two reasons.
First, TCP is typically implemented in the operating system (OS) kernel of both clients and servers, and making changes to TCP requires kernel changes or upgrades. Kernel changes have system-wide impact, so they are typically deployed cautiously and slowly on servers.
Clients — even on mobile OSes that have more rapid upgrade cycles — are problematic because there are sizeable user populations that end up years behind the latest OS/kernel versions, and because many users never upgrade their OSes at all. As a result, even simple transport-level changes take years to deploy on the internet today.
QUIC is much better placed to evolve rapidly because it is built on top of UDP and will commonly run in userspace, making it possible to deploy changes quickly at both servers and clients.
Second, the internet ecosystem no longer allows TCP to change. The past two decades of internet growth have seen a staggering increase in the number of network elements on the path between a server and a client, such as TCP “optimization” boxes that inspect or modify the TCP header. These interposing network elements, called middleboxes, often unwittingly prevent changes to TCP headers and behavior, even when the server and the client are both willing. As a result, the internet ecosystem has ossified TCP: even small changes are now impossible to deploy without unintended consequences. TCP Fast Open is a stellar example of one such modification. Eight years after it was first proposed, it is still not widely deployed, largely because of middleboxes.
This is also where QUIC’s design shines. QUIC encrypts and cryptographically authenticates both the transport data and the protocol itself, preventing middleboxes from inspecting or modifying the protocol header and future-proofing the protocol in the process. QUIC also has versioning built in (yes, there is no versioning in TCP), which gives us the freedom to consider deep and wide changes as the future requires. Versioning also allows us to consider alternative versions that we can build, tune, and deploy within our POPs, as we’ll now discuss.
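A small illustration of how little stays visible on the wire: the version-independent QUIC invariants leave essentially one cleartext flags byte (whose high bit distinguishes long headers from short ones), a 4-byte version field in long headers, and the connection IDs; everything else is protected. That version field is all the versioning machinery needs, and about all a middlebox can see. The packet bytes below are fabricated for the example:

```python
import struct

def quic_long_header_version(packet):
    """Extract the 32-bit version from a QUIC long-header packet.

    Per the version-independent QUIC invariants: one flags byte (high bit
    set means long header), followed by a 4-byte version field.
    """
    if not packet or not packet[0] & 0x80:
        return None  # short-header packets carry no version on the wire
    (version,) = struct.unpack_from("!I", packet, 1)
    return version

# Fabricated long-header packet: flags byte 0xC0, then version 0x00000001,
# the number assigned to QUIC version 1.
sample = bytes([0xC0, 0x00, 0x00, 0x00, 0x01]) + b"rest-of-packet"
print(hex(quic_long_header_version(sample)))  # 0x1
```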
Meet smarter POPs
We believe there are two key ways QUIC can improve connection handling and scalability at the edge while managing traffic inside Fastly’s Points of Presence (POPs, or data centers).
First, we are keen to experiment with various transport modifications, including congestion control and loss recovery, to improve traffic latency and efficiency between servers in our POPs. QUIC, being in userspace, can also integrate better and deeper with our tooling and logging infrastructure, making it easier to run and learn from experiments.
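For a feel of what that looks like (hypothetical names here, not quicly's actual API): in a userspace transport, the congestion controller can be an ordinary object a connection is constructed with, so trying an experimental variant is a one-line swap and a redeploy rather than a kernel patch, and it can log through the same pipeline as everything else.

```python
# Hypothetical userspace transport with a pluggable congestion controller.
class NewRenoLike:
    """Simplified additive-increase/multiplicative-decrease controller."""
    def __init__(self, mss=1200):
        self.mss = mss
        self.cwnd = 10 * mss          # initial congestion window, in bytes

    def on_ack(self, acked_bytes):
        # Congestion avoidance: grow about one MSS per window's worth of ACKs.
        self.cwnd += self.mss * acked_bytes // self.cwnd

    def on_loss(self):
        self.cwnd = max(2 * self.mss, self.cwnd // 2)

class QuicConnection:
    def __init__(self, congestion_controller, log=print):
        self.cc = congestion_controller  # swapping in an experiment is one line
        self.log = log                   # hooks straight into our own logging

conn = QuicConnection(NewRenoLike())
conn.cc.on_ack(1200)
conn.log(f"cwnd is now {conn.cc.cwnd} bytes")
```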
Second, QUIC introduces significant flexibility by divorcing a sender’s view of traffic from a receiver’s view. Without getting into the weeds here, separating these two essentially means that we can optimize our serving without requiring changes to the receivers. That is, we can shape the use of QUIC to efficiently fit our unique CDN and POP architecture, without requiring any changes at end users, potentially resulting in even lower latency to users. We are exploring ideas in this space and we are excited about the potential we see here!
And of course there’s a lot more in QUIC that should be of interest to CDNs and other server operators, no matter what their architecture: like how it encrypts so much more of the transport, thereby making users safer. Or how it enables stateless load balancing, as well as connection mobility, allowing a client to seamlessly move between networks — WiFi and cellular networks for instance — without losing connections.
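To sketch the stateless load balancing idea (a deliberately naive, hypothetical encoding, loosely in the spirit of proposals being discussed around QUIC): the server controls the connection IDs a client will put on its packets, so it can embed a backend identifier in them. A load balancer can then route every packet without keeping a per-connection table, and the routing keeps working even after the client moves between networks.

```python
# Hypothetical connection-ID routing; real schemes obfuscate or encrypt the
# embedded server ID, but a bare prefix keeps the idea visible.
BACKENDS = {0x01: "cache-01", 0x02: "cache-02"}

def issue_connection_id(server_id, nonce):
    """Server-issued connection ID carrying the backend's own identifier."""
    return bytes([server_id]) + nonce

def route(destination_cid):
    # No per-connection state: the routing key travels inside every packet.
    return BACKENDS[destination_cid[0]]

cid = issue_connection_id(0x02, b"\x9a\x7f\x10\x42\x33")
print(route(cid))  # cache-02, no matter which network the client is on now
```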
What's next?
QUIC is still in the standardization process, and while there are currently more than 20 implementations of the IETF version (including ours), all of them are working furiously towards interoperability and full HTTP functionality. The Working Group expects the core drafts to be final this year, and we expect the IETF flavor of QUIC to be available in browsers later in the year as well.
The advantages of QUIC still need to be proven for the diversity of websites and use cases beyond Google's limited set. We suspect there will be an extended period during which both server and browser implementations need optimization, hardening, and operational experience. The end is in sight, but there's work to be done to get there.
And that’s more than OK. We believe that the advantages to Fastly — and the internet overall — are worth investing in and waiting for. As support for HTTP/3 starts making its way into publicly available clients, we look forward to working with our customers on testing, deploying, and innovating together on this exciting new technology.