QUIC and Streams

In my previous post, I talked about why QUIC matters to the future of the Internet. I'd also like to look a little more closely at why it matters to HTTP specifically.

Obviously, the improvements we're able to import from TCP developments will benefit HTTP immediately - the ability to transfer data on the first round trip is an enormous win for latency. But there is also (at least) one key way that HTTP/QUIC improves on HTTP/2 in an immediate and noticeable fashion.

Let's think for a moment about how loss and multiplexing interact. TCP provides an in-order bytestream -- that means when a packet is lost, the data that does arrive waits until the missing packet is retransmitted. Which is precisely what you want when you're dealing with a single stream that you're reading sequentially. However, HTTP/2 is multiplexing on top of that bytestream -- the later bytes might or might not be logically related to (and blocked by) the missing earlier bytes. But TCP doesn't know that -- it sees a single stream.
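To make the head-of-line blocking concrete, here's a small Go sketch of a TCP-style receiver that only releases data to the application in sequence order. The `segment` type and the stream tags are purely illustrative, not any real stack's types; the point is that stream B's data sits in a buffer until stream A's lost segment is retransmitted.

```go
package main

import "fmt"

// segment models a TCP segment: a sequence number plus the multiplexed
// payload it happens to carry, tagged here with the HTTP/2 stream it
// belongs to. These names are illustrative, not from any real stack.
type segment struct {
    seq     int    // position in the single TCP byte stream
    stream  string // which HTTP/2 stream the payload belongs to
    payload string
}

func main() {
    // Segments as sent: seq 0 carries stream A, seq 1 and 2 carry stream B.
    sent := []segment{
        {0, "A", "A-part-1"},
        {1, "B", "B-part-1"},
        {2, "B", "B-part-2"},
    }

    // Simulate loss of the first segment: only seq 1 and 2 arrive.
    arrived := sent[1:]

    // A TCP receiver delivers bytes strictly in order, so it buffers
    // out-of-order segments and hands nothing to the application until
    // the gap at seq 0 is filled.
    buffer := map[int]segment{}
    next := 0
    deliver := func(s segment) {
        buffer[s.seq] = s
        for {
            seg, ok := buffer[next]
            if !ok {
                return // still waiting on a missing segment
            }
            fmt.Printf("delivered to app: stream %s %q\n", seg.stream, seg.payload)
            delete(buffer, next)
            next++
        }
    }

    for _, s := range arrived {
        deliver(s)
    }
    fmt.Println("nothing delivered yet: stream B is blocked behind stream A's lost segment")

    // Retransmission of seq 0 finally arrives; everything unblocks at once.
    deliver(sent[0])
}
```

Until the retransmission shows up, nothing reaches the application -- even though every byte stream B needs has already arrived.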

This is almost certainly the root cause of the issue discussed in this excellent talk. As packet loss rises to non-negligible levels, HTTP/2 suffers. HTTP/1.1 -- with its many parallel connections -- is more resilient, because a lost packet delays only the one transfer on that connection, not every active transfer.

QUIC provides lightweight multistreaming inside the transport. It knows which packets carried data for which streams, so it can deliver each stream's data in order -- and, more importantly, it avoids delaying unrelated streams when one stream suffers packet loss.
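Here's the same scenario with a per-stream delivery cursor, which is roughly the behavior described above (the `frame` and `receiver` types are my own toy model, not quic-go or any real implementation): stream B's data is delivered as soon as it arrives, and the loss only ever delays stream A.

```go
package main

import "fmt"

// frame models a QUIC STREAM frame: data at a given offset within one
// stream. Names here are illustrative, not the actual wire format.
type frame struct {
    stream  string
    offset  int
    payload string
}

// receiver keeps an independent delivery cursor per stream, so a gap in
// one stream never holds back data for another.
type receiver struct {
    next    map[string]int     // next expected offset per stream
    pending map[string][]frame // out-of-order frames, buffered per stream
}

func newReceiver() *receiver {
    return &receiver{next: map[string]int{}, pending: map[string][]frame{}}
}

func (r *receiver) onFrame(f frame) {
    r.pending[f.stream] = append(r.pending[f.stream], f)
    // Deliver whatever is contiguous on this stream only.
    for {
        delivered := false
        for i, p := range r.pending[f.stream] {
            if p.offset == r.next[f.stream] {
                fmt.Printf("stream %s delivered: %q\n", p.stream, p.payload)
                r.next[f.stream] += len(p.payload)
                r.pending[f.stream] = append(r.pending[f.stream][:i], r.pending[f.stream][i+1:]...)
                delivered = true
                break
            }
        }
        if !delivered {
            return
        }
    }
}

func main() {
    r := newReceiver()
    // The packet carrying stream A's first bytes is lost; stream B's
    // frames still arrive and are delivered immediately.
    r.onFrame(frame{"B", 0, "B-part-1"})
    r.onFrame(frame{"B", len("B-part-1"), "B-part-2"})
    // Stream A's retransmission arrives later and only affects stream A.
    r.onFrame(frame{"A", 0, "A-part-1"})
}
```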

"Lightweight" means it's cheap to create new streams and close them. That means you can run an application over QUIC in one of two general modes:

  • Streams represent entire transactions, which are potentially large. This is the model used by HTTP/2 streams, in which each stream contains an HTTP request and response, either (or both) of which could be gigabytes in size.
  • Streams represent discrete messages, which are potentially small. This is the model being considered for some other protocols, though the mappings are currently semi-hypothetical.

The properties of the streams are easy to understand - each stream has mostly the behaviors that you're used to from a TCP connection. The streams are:

  • Reliable - data sent on the stream is guaranteed to be delivered (eventually) unless the stream is aborted.
  • In order - data sent on the stream is delivered to the application in the order it was sent.
  • Flow controlled - the receiver can control how quickly it will accept data from the sender (a toy sketch of this one follows the list).
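To make that last property concrete, here's a minimal sketch of a credit-based window: the receiver grants credit as its application consumes data, and the sender may only send what it has credit for. The `streamFlow` type and its methods are invented for illustration; real QUIC does this with per-stream MAX_STREAM_DATA frames plus a separate connection-level limit.

```go
package main

import "fmt"

// streamFlow is a toy model of per-stream flow control: the receiver
// advertises a credit window, and the sender may only send as many bytes
// as it currently has credit for.
type streamFlow struct {
    credit int // bytes the sender is currently allowed to send
}

// trySend sends at most the permitted number of bytes and reports how
// many were actually accepted by flow control.
func (s *streamFlow) trySend(data []byte) int {
    n := len(data)
    if n > s.credit {
        n = s.credit // the rest must wait for more credit
    }
    s.credit -= n
    return n
}

// grant models the receiver handing back credit as its application reads
// data out of the stream's receive buffer.
func (s *streamFlow) grant(n int) {
    s.credit += n
}

func main() {
    s := &streamFlow{credit: 10}

    sent := s.trySend(make([]byte, 25))
    fmt.Printf("sender wanted 25 bytes, flow control allowed %d\n", sent)

    // The receiver's application consumes data, so the receiver opens the
    // window further and the sender can continue.
    s.grant(15)
    sent = s.trySend(make([]byte, 15))
    fmt.Printf("after the receiver granted more credit, sender sent %d more\n", sent)
}
```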

It's the properties of the connection that might surprise you if you're trying to build on it. In HTTP/2, since the in-order bytestream sits underneath the multiplexing layer, there are also implicit ordering guarantees between the streams. The header compression protocol in HTTP/2, HPACK, relies on that ordering to keep the encoder's and decoder's state consistent -- and how best to provide (or do without) that guarantee is something the QUIC working group is still wrestling with.
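To see why that ordering matters, here's a toy version of HPACK's dynamic table -- not real HPACK encoding (no static table, no eviction, simplified indexing), just the dependency: one header block inserts an entry, a later block refers to it by index, and the reference only resolves if the blocks are decoded in the order they were encoded.

```go
package main

import "fmt"

// dynamicTable is a toy stand-in for HPACK's dynamic table: later header
// blocks may refer to entries that earlier blocks inserted, so decoding
// depends on seeing the blocks in the order they were encoded.
type dynamicTable struct {
    entries []string
}

func (t *dynamicTable) insert(header string) {
    t.entries = append(t.entries, header)
}

func (t *dynamicTable) lookup(index int) (string, bool) {
    if index < 0 || index >= len(t.entries) {
        return "", false
    }
    return t.entries[index], true
}

func main() {
    // Block 1 (encoded first) inserts a header into the shared table.
    // Block 2 (encoded second) just says "reuse table entry 0".
    decodeBlock1 := func(t *dynamicTable) { t.insert("x-example: 1") }
    decodeBlock2 := func(t *dynamicTable) {
        if h, ok := t.lookup(0); ok {
            fmt.Println("block 2 decoded:", h)
        } else {
            fmt.Println("block 2 undecodable: it references an entry the decoder hasn't seen yet")
        }
    }

    // Over HTTP/2's single ordered bytestream, block 1 always arrives
    // before block 2, so the reference resolves.
    ordered := &dynamicTable{}
    decodeBlock1(ordered)
    decodeBlock2(ordered)

    // Over independent QUIC streams, block 2's stream can arrive first,
    // and the shared decoder state it depends on isn't there yet.
    reordered := &dynamicTable{}
    decodeBlock2(reordered)
    decodeBlock1(reordered)
}
```

Over HTTP/2's single bytestream the decode order is guaranteed; over independent QUIC streams it isn't, and that's exactly the state-consistency problem the working group is grappling with.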